Here we look at AI and AGI (Artificial General Intelligence) in areas that could leave you scratching your head.
AI can read your mind
Mind reading may soon be possible: a team of University of California, San Francisco (UCSF) scientists has developed artificial intelligence (AI) that can translate someone’s brain activity into text, according to Interesting Engineering.
“We are not there yet but we think this could be the basis of a speech prosthesis,” said Dr. Joseph Makin of UCSF, a co-author of the paper.
Four participants had electrode implants placed in their brains and were asked to repeat 50 different sentences aloud while their brain activity was recorded.
Once the brain-activity data from the 50 spoken sentences was collected, it was fed into a machine-learning algorithm, which converted the recordings into strings of numbers and compared them against the audio recordings. Over time, the system learned to convert those numbers back into English sentences.
A sentence like “Those musicians harmonize marvelously” was translated as “The spinach was a famous singer”, but it’s a start.
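The pipeline described above, numeric signatures of brain activity mapped back to known sentences, can be sketched in miniature. This is an illustrative toy only: the actual UCSF system trained neural networks on real cortical recordings, whereas here invented numeric vectors stand in for brain-activity data and a nearest-neighbor lookup stands in for the learned mapping.

```python
# Toy sketch of decoding "brain activity" numbers into sentences.
# The vectors and sentences below are invented for illustration.
import math

# Hypothetical "training" data: each sentence paired with a numeric
# signature derived from (simulated) brain activity.
training = {
    (0.9, 0.1, 0.4): "Those musicians harmonize marvelously",
    (0.2, 0.8, 0.5): "The spinach was a famous singer",
}

def decode(signal):
    """Return the training sentence whose signature is closest to `signal`."""
    best = min(training, key=lambda sig: math.dist(sig, signal))
    return training[best]

# A noisy new recording is matched to the nearest known sentence.
print(decode((0.85, 0.15, 0.35)))
```

A real decoder generalizes far beyond a fixed lookup table, which is also why it can fail in the amusing ways the article describes: a near-miss in signal space lands on an entirely unrelated sentence.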
Super drug screening AI
NVIDIA is bringing its AI expertise to bear in the fight against COVID-19, the company announced, according to Engadget.
Specifically, the first of NVIDIA’s new line of AI-driven supercomputing systems, the DGX A100, will be sent to Argonne National Lab where it will help government researchers screen potential coronavirus therapeutics far faster than they could do by hand.
The DGX A100 boasts 5 petaflops of computing power and will help researchers explore treatments and vaccines and study the spread of the virus, enabling scientists to do years’ worth of AI-accelerated work in months or days. NVIDIA says the system will “be able to screen 1 billion drugs in under 24 hours.”
A whole new AI lexicon
ThisWordDoesNotExist.com does what its name suggests: it invents new words, complete with definitions. The result is a plausible stream of vocabulary worthy of Wikipedia consideration.
Some words sound like modern managerial nonsense (“deleveragement – the action of humiliating someone by allowing them to remain silent”).
Others hint at a genuine etymological history (“sabbatory – an institution devoted to the study of mystical religious learning”), while others exude a powerful air of mystery and calm (“cheeless – of or covered with a layer of stone, bark, or other organic matter”).
One-shot, AI-generated webpages have been a thing for a while now, starting with the startling fictional faces of ThisPersonDoesNotExist.com, and including more esoteric examples such as ThisArticleDoesNotExist.com.
ThisWordDoesNotExist was created by San Francisco-based developer Thomas Dimson (a former principal engineer at Instagram who designed the app’s recommendation algorithms), and it uses the AI language framework known as GPT-2, which was made by AI lab OpenAI and unveiled last February.
AI with a conscious mind
This time, Interesting Engineering delved into artificial general intelligence (AGI), technically defined as a machine with the capacity to understand or learn intellectual tasks the way humans can.
This is the main difference from highly specialized AI, which is good at only one task, such as playing chess, performing data analytics, or answering routine questions the way chatbots do.
Can AI have emotional intelligence? Consider: if someone waves a white flag in battle, a computer might recognize it for what it literally is, a white flag waving. Our emotional intelligence, however, gives us the context to understand that waving a white flag is likely a call for surrender.
A tall order, but perhaps achievable in the not-so-distant future.
AIs also score at about the level of a four-year-old child on IQ tests, so they’re not quite at the human level of deduction yet, either.
So, true intelligence combines the ability to problem-solve and understand with the ability to interpret and read between the lines. And this applies not only on the receiving side but also on the giving side: for computers to achieve artificial general intelligence, they need to not only understand human tone and context, but also be able to dish it out themselves.
Machines and AIs are making progress in these areas of intelligence. Natural language processing and generation algorithms are bringing us closer to AIs that can talk and sound like us, at least on the surface. Google Home and Amazon Alexa provide giant data pools that allow programmers to design better and better AIs.
But what about artificial consciousness?
Can AGI ever achieve consciousness in the same way humans can? And if it could, would we need to treat it as a person?
Scientifically speaking, consciousness comes directly from biological input being interpreted and reacted to by a biological creature, such that the creature becomes its own thing.
One thing that defines human consciousness is the ability to recall memories and dream about the future. In many respects, this is a uniquely human capability. If a machine could do this, then we might define it as having AGI.
If a computer could dream for itself, not because it was programmed to do so, this might be the biggest indicator that AGI is here.
AGI needs to be able to process and understand emotions, problem solve, express forms of emotion, and perhaps most importantly, it needs to have a rough consciousness.
How far are we from that?
From a technology perspective, we’re still pretty far off, likely a few decades. Experts predict the first rough AGI will be created around 2030, but add that it won’t be until 2060 that AGI is good enough to pass a “consciousness test”.