In his keynote delivered on the second day of AIBC Balkans & CIS 2023, Jayden Sage, CEO of the World Crypto Council, leads us through the promises and perils of AI.
Jayden kicks things off by asking the audience to do a little thought experiment: imagine that, by holding hands, they could access the collective knowledge of everyone in the room, then the entire conference, then Cyprus, then Europe and, ultimately, the world.
This, he says, is the power of AI.
He goes on to caution the audience about the risks it brings. “What you’ve seen out there is ChatGPT, but the AI field is actually quite old. This is just the first application and it’s already made people go sideways.”
“So what’s in the pipeline is not just what you see from OpenAI. Microsoft, Google, they all have versions that are far more powerful than the ones that you see outside. Most of these generative AI programs, the more advanced ones, are already asking questions about self-preservation.”
Worrying about self-preservation, he says, is the number one sign of what these systems are capable of: it means they are getting closer and closer to achieving sentience – they are coming alive. And, as Moore’s Law suggests, computing power doubles roughly every 18 months.
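The growth rate implied by that 18-month doubling figure can be sketched as a quick back-of-the-envelope calculation (a toy illustration of the number quoted above, not a forecast):

```python
# Toy projection of computing power under the 18-month doubling
# figure quoted above, starting from a baseline of 1 unit.
DOUBLING_PERIOD_MONTHS = 18

def projected_power(months: float, baseline: float = 1.0) -> float:
    """Computing power after `months`, doubling every 18 months."""
    return baseline * 2 ** (months / DOUBLING_PERIOD_MONTHS)

# After a decade (120 months), power has doubled 120/18 ≈ 6.7 times:
print(round(projected_power(120), 1))  # ≈ 101.6x the baseline
```

The point the speaker is gesturing at is simply that exponential doubling compounds quickly: a decade at that rate is a roughly hundredfold increase.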
He goes on to explain that AI offers a concept called inference: it will know you better than you know yourself, because it can detect patterns. And the more pattern-recognition capacity a system has, the more intelligent it is perceived to be.
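The inference idea Jayden describes can be sketched with a deliberately crude toy example (hypothetical code, not any real product’s method): predicting a user’s next action purely from the frequency of their past behaviour.

```python
from collections import Counter

def predict_next(history: list[str]) -> str:
    """Predict the most likely next action as the most frequent past
    one — the simplest possible form of pattern-based inference."""
    return Counter(history).most_common(1)[0][0]

# A user's activity log hints at what they'll probably do next:
history = ["news", "crypto", "crypto", "email", "crypto"]
print(predict_next(history))  # prints "crypto"
```

Real systems use far richer models, but the principle is the same: enough observed patterns let a system anticipate behaviour its subject may not have noticed in themselves.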
“AI is a mirror to humanity because we are the creators of it, and if we infuse positivity into it and the things that we need to build a better human race, then we’ll be OK.”
The big takeaway he opines is that the challenges are going to stem from the bad things that we put into AI.
Also problematic, he says, is that once AI starts writing code (which, technically, it already can), we will not be able to go in and fix that code, because we will not understand it. It is therefore essential that we integrate the right themes into AI from the beginning.
“Once this genie is out of the bottle, it will have a life of its own and we will not be able to pull the plug. And unlike a genie that grants you wishes, it’ll be the genie that goes after you. That’s the big risk.”
In addition to this, Jayden warns that the technology is moving very, very fast. Most people, he explains, were not aware that Google’s Bard, when it came out, was essentially a dumbed-down, fast version built to compete with OpenAI’s product. Google has a much more capable model called LaMDA, he goes on to say, and if you want to learn how scary LaMDA is, you can Google the conversation it had with one of its engineers.
“Just remember, it’s about us. When you point a finger, three fingers are pointing back at you. The problem is that in the US, if we build regulations to limit AI’s prowess, speed and trajectory, that is exactly the point where our competitors internationally will say, OK, that is a place we can jump right into and gain an edge over other countries. So if we limit ourselves, then we also shoot ourselves in the foot.”
Lawmakers, he suggests, need to walk a very fine line. Either way, though, technology keeps moving forward.
“If America keeps doing it, then China will not have that power or that ability to protect itself. It’s delusional for Europe to think that they can just put in all of these limits. They can only afford to put in those limits if America doesn’t put in those limits. If America puts in a limit, Europe puts in a limit. There are new world powers to be born.”
Despite this, he is confident that the risk is worth it, saying: “I’m positive that we’re going to use AI for all the positives. If we can cure cancer alone through AI, then that is something that’s worth the risk that we’re about to take.
“The worst thing we can do is stand in its way and say no, no, no, you can’t leave. Then watch out.”