About Us

In 2018, Google CEO Sundar Pichai said something that caused a lot of people to take notice: “AI is the most important thing humanity is working on right now. It’s more profound than electricity or fire.” While some were quick to dismiss the comment as hyperbole, the truth is that Pichai may be onto something.

The speed at which AI has advanced over the past few years has been nothing short of breathtaking. Progress has been fueled by the fact that neural networks are “scalable”: they can be trained to perform ever more complex tasks, and their performance keeps improving as they grow. In some cases, these systems have exceeded even the wildest predictions. But while their current capabilities are impressive, they remain far from perfect. They make basic mistakes that humans wouldn’t, and they will unhesitatingly assert things that are false. These limitations may be overcome, but doing so will require a major investment of time and money. And with billions of dollars now being staked on blowing past them, the race is on.

Currently, no central agency oversees the development of AI. Viruses are carefully studied in biosafety labs, and uranium is enriched under strict supervision, but the race to build AI has been left largely unregulated, even as some of the field’s leading researchers put the odds that their work could lead to human extinction at roughly one in ten.

This lack of oversight may be part of the problem. With no one checking on their progress, tech companies are developing AI that some of their own leaders acknowledge could become more dangerous than nuclear weapons, and they are betting billions of dollars that their work will soon overcome the limitations described above.

A recent article in Scientific American argues that this risk is a big reason to establish a centralized research authority, one that would study AI systems, examine the way they think and make decisions, and perhaps endow them with a kind of self-awareness. It would act as something like a safety regulator for artificial minds, and it may be the best way to ensure we don’t end up with a Terminator-style machine that destroys everything in its path.

At OmniVoid, a team of top-tier engineers is at the forefront of AI and Extended Reality (XR), a technology that blurs the line between the digital and physical worlds. With a shared passion for innovation and a drive to change the world, OmniVoid is quickly becoming a force to be reckoned with in the tech industry.
