Archive for the ‘Artificial General Intelligence’ Category

What is AI? | National | foxbangor.com – FOX Bangor/ABC 7 News and Stories

AI, or artificial intelligence, is a branch of computer science concerned with building systems that simulate human intelligence: mimicking human capabilities, completing tasks, processing human language and performing speech recognition. AI is the leading innovation in technology today, and its primary goal is to eliminate tedious tasks and provide immediate access to extremely detailed and hyper-focused information and data.

AI can consume and process massive datasets, learn the patterns within them and use those patterns to make predictions about future tasks.
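To make that concrete, here is a minimal, hypothetical sketch of the fit-and-predict pattern the paragraph describes, using scikit-learn; the toy dataset and model choice are illustrative assumptions, not anything from the article.

```python
# Minimal sketch of learning patterns from data and predicting from them.
# The numbers below are an invented toy dataset, purely for illustration.
from sklearn.linear_model import LogisticRegression

X_train = [[0.2, 1.0], [0.9, 0.3], [0.1, 0.8], [0.8, 0.1]]  # past observations
y_train = [0, 1, 0, 1]                                      # their known labels

model = LogisticRegression()
model.fit(X_train, y_train)          # "learn patterns" from the dataset
print(model.predict([[0.85, 0.2]]))  # predict for an unseen input (expect [1])
```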

While interest in AI is growing around the world, the science poses an existential threat to jobs, companies, whole industries and potentially human existence itself. In March, Goldman Sachs released a report warning the public of the threat that AI tools such as ChatGPT, an artificial intelligence chatbot developed by AI research company OpenAI, pose to jobs. The report found that roles with repetitive responsibilities, along with some manual labor, are at risk of automation, and concluded that 300 million jobs could be affected by AI.

ARTIFICIAL INTELLIGENCE FAQ

In simple terms, artificial intelligence is the branch of computer science that builds systems capable of completing tasks that humans already perform or that require human intelligence to complete.

AI uses technology to learn and recreate human tasks. In some situations, AI can already perform those tasks better than we do, which poses a threat to the workforce.

While it may seem AI has only recently become popular or relevant to society, it has been used in many ways for years.

Reactive machines are task-specific and a basic form of AI. They react to the input provided to them, and the same input always produces the same output. As reactive machines, AI systems do not learn new concepts; they apply existing datasets and respond with recommendations based on inputs they have already seen.

An example of a reactive machine is the recommendations section in Netflix, whereby TV shows and movies are recommended by the streaming service to a user based on their search and watch history.
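As a rough illustration of that "same input, same output" behavior, here is a hypothetical Python sketch of a reactive recommender; the catalog, genres and scoring rule are invented for the example and are not Netflix's actual system.

```python
# A reactive recommender: no learning, just a fixed response to the input.
# Titles and genres are made up for illustration.
CATALOG = {
    "Space Saga":  {"sci-fi", "adventure"},
    "Robot Dawn":  {"sci-fi", "drama"},
    "Baking Wars": {"reality", "food"},
    "Galaxy Cops": {"sci-fi", "action"},
}

def recommend(watch_history: list[str], top_n: int = 2) -> list[str]:
    # Collect the genres the user has already watched...
    seen = set()
    for title in watch_history:
        seen |= CATALOG.get(title, set())
    # ...and rank unwatched titles by genre overlap. Identical history
    # always yields identical output: the machine learns nothing new.
    unwatched = [t for t in CATALOG if t not in watch_history]
    return sorted(unwatched, key=lambda t: len(CATALOG[t] & seen),
                  reverse=True)[:top_n]

print(recommend(["Space Saga"]))  # -> ['Robot Dawn', 'Galaxy Cops']
```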


Limited-memory AI learns by storing previously captured data and building knowledge for the future from what it finds. An example of limited memory is the self-driving car.

Self-driving cars use signals and sensors to detect their surroundings and make driving decisions. The cars compute where pedestrians, traffic signals and low-light conditions exist in order to drive more cautiously and avoid accidents or traffic errors.

Theory of mind is the understanding that other people have thoughts, feelings, emotions and desires that shape their day-to-day behaviors and decisions. While early generations of AI struggled with theory of mind, the field has since made astonishing improvements. For AI to acquire theory of mind, it must understand that everyone has feelings and develop the ability to adjust its behavior as humans do.

An example of theory-of-mind reasoning for humans is seeing a wilted plant and understanding that it needs to be watered in order to survive. For AI to have theory of mind, it will need to make the same kind of inference.

As of February 2023, ChatGPT has passed theory of mind tests at a level commensurate with a 9-year-old's ability.

Finally, when AI is self-aware, the stages of development will be complete. Self-awareness is the most challenging of all AI types, as such machines would have achieved human-level consciousness, emotions and empathy, and could commiserate accordingly.

Once the machine has learned to be self-aware, it will have the ability to form its own identity.

This stage of self-awareness is not currently possible. In order for self-awareness to become a possibility, scientists will need to find a way to replicate consciousness in a machine.


Challenger, Gray & Christmas, an outplacement firm in Chicago, found in an April report that ChatGPT could replace 4.8 million jobs in the future. Specifically, ChatGPT would replace job roles that are repetitive and predictable, including copywriters, customer service representatives, cashiers, data clerks, drivers and more.

Individuals with graduate degrees are the most fearful of losing their jobs to AI: nearly 69% of them expressed that fear, according to a Tidio survey. While humans are becoming increasingly alarmed by AI, we are already using it in our daily lives in ways people might not even realize.

Here are some of the most popular and typical ways we're already leveraging AI.

Facial recognition is used mostly by law enforcement to identify criminals and assess potential threats. Individuals use it daily to unlock smart devices, and encounter it on social media through features like Facebook's photo tag recommendations.

Detecting violations of community guidelines, recognizing faces and translating languages are just a few of the ways social media operates alongside AI.

Google Home, Amazon Alexa and Apple Siri are all examples of voice assistants that employ AI. Voice assistants use natural language processing and can discover patterns in user behavior in order to store preferences and tailor results. The more you use them, the more the voice assistant learns.


Smart home devices are used in a variety of ways including the protection and security of your home. Technology like Ring doorbells and Nest security systems use AI to detect movement and alert homeowners.

Voice assistants like Siri and Alexa are also examples of smart devices.

Search engines like Google, Bing and Baidu use AI to improve search results for users. Recommended content based on initial search terms is provided to users every time they search. Search engines use natural language processing, a branch of AI, to recognize search intent and provide the most relevant results.

For example, if you search for "rose," results for rosé the pink wine, the rose flower, Rose the singer or rose the verb may appear. When you provide context to your search, AI assimilates it and suggests results.

If you're using Google to query "Marylin Monrow," the search engine giant suggests the correct search term and results for "Marilyn Monroe." Search engines are using AI to grasp spelling, context, language and more in order to best satisfy users.
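A toy version of that "did you mean" behavior can be written with Python's standard library; this is a simple string-similarity lookup for illustration, not Google's actual spelling pipeline, and the candidate list is invented.

```python
# Suggest the closest known query for a misspelled search term.
import difflib

KNOWN_QUERIES = ["marilyn monroe", "rose wine", "rose flower"]  # invented list

def suggest(query: str) -> str | None:
    # get_close_matches ranks candidates by string similarity (0.0-1.0).
    matches = difflib.get_close_matches(query.lower(), KNOWN_QUERIES,
                                        n=1, cutoff=0.6)
    return matches[0] if matches else None

print(suggest("Marylin Monrow"))  # -> 'marilyn monroe'
```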

AI is also the power behind the rapid adaptation of search results. Trillions of searches are performed every year, and humans don't have the ability to comb through all those results, but AI does.

When you come home from a long day at work to relax on the couch and throw on Netflix, you're leveraging AI to help you choose the next TV show or movie you'll watch. When you log onto Instagram or Facebook and a suggested list of new followers or friends appears, you're experiencing the power of AI. When you open your Google Maps app and type "gas" into the search bar to locate the closest gas station near you, you're using AI to make your life easier.

Artificial narrow intelligence, or ANI, is also known as "weak" AI. ANI systems are capable of handling only singular or limited tasks, the exact opposite of strong AI, which handles a wide range of tasks.

Examples of ANI include Apple's Siri, Netflix recommendations and the weather app where you can check the weather for the day or the week. While Siri can assist with numerous tasks, such as announcing calls or text messages, playing music and launching smart device apps, it struggles with tasks outside its immediate capabilities.

ANI systems are not self-aware and do not possess genuine intelligence, according to deepAI.org.

ANI uses datasets with specific information to complete tasks and cannot go beyond the data provided to it. Though systems like Siri are capable and sophisticated, they cannot be conscious, sentient or self-aware.

"LLMs have a broader set of capabilities than previous narrow AIs, but this breadth is limited," said Ben Goertzel, expert in Artificial General Intelligence, in a Fox News Digital Opinion article. "They cannot intelligently reason beyond their experience-base. They only appear broadly capable because their training base is really enormous and covers almost every aspect of human endeavor."

Artificial general intelligence, or AGI, is AI that can perform any intellectual task a human can, according to medium.com. Capabilities attributed to AGI range from consciousness to self-awareness. We have seen imagined versions of life with AGI in movies like "Her" and "Wall-E."

In the Pixar animated film "Wall-E," the sad, lonely robot meets another robot, Eve, and they fall in love. While the characters in this film are portrayed as sentient, they are AGI systems. In the 2013 film "Her," which stars Joaquin Phoenix, the AI assistant is also an AGI system: she outgrows her first owner and goes off to be on her own.

AGI systems learn, execute, reason, and more but do not experience consciousness.


Artificial superintelligence or ASI is the type of AI most people are fearful of. It will have the ability to surpass human intelligence in a number of ways including creativity, self-awareness, problem-solving and more. ASI, if ever created, will have the ability to be sentient. While people are worried about AI becoming sentient, the technology is years away from such capabilities.

In 2018, at the South by Southwest (SXSW) tech conference in Austin, Texas, Elon Musk expressed his concerns over AI and over regulations governing the development of ASI.

Read the original here:

What is AI? | National | foxbangor.com - FOX Bangor/ABC 7 News and Stories

The Politics of Artificial Intelligence (AI) – National and New Jersey … – InsiderNJ

On May 27, former Secretary of State Henry Kissinger will attain the age of 100. Over the last few months, I have been involved in authoring an historical essay, "Kissinger at 100: His Complex Historical Legacy."

The essay is scheduled to be published around the time of Kissinger's birthday by the Jandoli Institute, the public policy center for the Jandoli School of Communication at St. Bonaventure University. The institute's executive director is Rich Lee, a former State House reporter who also served as Deputy Communication Director for former Governor Jim McGreevey. I will also be developing a podcast regarding my essay.

For me, this project is truly a career capstone, utilizing all my analytic skills developed over a lifetime. This includes, inter alia, my studies as a political science honors scholar as a Northwestern University undergraduate, my service as a Navy officer, my years as a corporate and private practice attorney, my career as a public official, including my leadership of two major federal and state agencies, my accomplishments as a college professor, and my most recent post-retirement career as an opinion journalist.

Whether one is an admirer or critic of Dr. Henry Kissinger, there is no question that he has been a transformative figure, with a greater impact on American history than any 20th century American other than our presidents. Researching his life and career is truly a Sisyphean endeavor.

Kissinger has authored thirteen books and a plethora of articles and has made numerous media appearances. In jocular fashion, I have told friends and family members that researching Henry Kissinger is like studying the Torah: you never finish it!

So about a month ago, I thought that I had finished all my Kissinger research, until I had the good fortune to meet with a friend of mine who also, unbeknownst to me, was a friend of Henry Kissinger. When I informed him of my Kissinger project, he proceeded to display for me on his iPhone numerous photos of him and the legendary Dr. K!

Then, he asked me what my research sources were. I proudly recited the list of my readings, videotape viewings and interviews. He responded by saying, "Very good, but you have a critical omission. You did not read the book The Age of AI: And Our Human Future."

The book was co-authored by Henry Kissinger, Eric Schmidt, former CEO of Google, and Daniel Huttenlocher, the Inaugural Dean of the MIT Schwarzman College of Computing. For ease of reference, and with all due respect to his co-authors, I will refer to this work as the Kissinger AI book.

I told my friend that I was aware of the book, but I had chosen not to include it in my essay because of my focus on Kissinger as a foreign policy maker and diplomat. My friend, however, admonished me: "You do not understand. For Henry, his involvement with AI is a legacy item."

So I immediately ordered the book. My friend was correct. The Kissinger AI book should be a must-read for high governmental officials, New Jersey and federal. Every New Jersey cabinet member and authority executive director should have this book on his or her desk.

Within the last month, AI has become a growing arena of national focus, sparked in large part by the resignation of Dr. Geoffrey Hinton from his job at Google. Dr. Hinton is known as the "Godfather of AI." He resigned so he could speak freely about the risks of AI. A part of him, he said, now regrets his life's work.

In New Jersey, late last year, a bill was introduced in the Assembly, A4909, which would mandate that employers could use only hiring software that has been subjected to a bias audit, which looks for any patterns of discrimination. It would require annual reviews of whether programs comply with state law.

The bill was generated because of increasing concern that a growing number of AI systems had either a gender, racial, or disability bias. As an example, Reuters reported in 2018 that Amazon had stopped using an AI recruiting tool because it penalized applicants with resumes that referred to women's activities or degrees from two all-women's colleges.

In February, NorthJersey.com journalist Daniel Munoz authored a comprehensive column dealing with AI and its potential dangers and biases in the hiring process. Included in the column was an interview with Assemblywoman Sadaf Jaffer (D-Mercer), a prime sponsor of this legislation.

It should be noted that the Kissinger AI book strongly recommends the auditing of AI systems by humans, rather than self-auditing by the machines themselves. Human auditing can increase the effectiveness of AI while mitigating its dangers.

And today, on Twitter, Assembly Majority Leader Lou Greenwald (D-Camden) stated as follows: "The power that Artificial Intelligence possesses makes it a potentially dangerous tool for people looking to spread misinformation. This is why I will be introducing legislation that looks to limit the harmful uses it has on election campaigns."

The beneficial effects of AI are real, as are the dangers. The politics of AI is the subject of increasing focus at both the national and New Jersey level.

The Kissinger AI book is highly relevant to all AI issues, both federal and state. The three-fold focus of the book makes it an indispensable basic guide to AI politics.

First, it gives a concise, contextual definition of AI. Second, it describes in depth the potential benefits and dangers of AI. Third, it proposes some initial solutions to deal with the emerging negative impacts of AI.

In terms of contextual definition, the Kissinger AI book describes two empirical tests of what constitutes AI.

The first is the Alan Turing test, stating that if a software process enabled a machine to operate so proficiently that observers could not distinguish its behavior from a human's, the machine should be labeled intelligent.

The second is the John McCarthy definition, which characterizes AI as machines that can perform tasks that are characteristic of human intelligence.

The Kissinger AI book also describes the impact of AI on the reasoning process, which is integral to decision-making. The three components of reason are information, knowledge, and wisdom. When information becomes contextualized, it leads to knowledge. When knowledge leads to conviction, it becomes wisdom. Yet AI lacks the reflection and self-awareness that are essential to wisdom.

This lack of wisdom, combined with three essential features of AI, magnifies its enormous danger in certain situations: 1) its use for both warlike and peaceful purposes; 2) its massive destructive force; and 3) its capacity to be deployed and spread easily, quickly, and widely.

The most alarming feature of AI is on the horizon: the arrival of artificial general intelligence (AGI). This means AI capable of completing any intellectual task humans are capable of, in contrast to today's narrow AI, which is developed to complete a specific task.

It is the growing capacity of AI systems for unsupervised self-learning that is facilitating the potential arrival of AGI. With AGI comes autonomy, and autonomy in weapons systems increases the potential for accidental war.

The potential of AI to lead to accidental war, along with the two above-mentioned dangers publicized in New Jersey, AI-generated job discrimination and political disinformation, are the negative aspects of AI that will receive the most focus in the forthcoming debate.

Yet AI is not without its extremely beneficial uses, most notably in the development of new prescription drugs. So the obvious task of government, federal and state, is to filter out the dangers and facilitate the beneficial uses.

As a first step, the Kissinger AI book recommends that new national governmental authorities be created with two objectives: 1) America must remain intellectually and strategically competitive in AI; and 2) Studies should be undertaken to assess the cultural implications of AI.

In New Jersey, the best way to meet this challenge governmentally would be to create a new cabinet-level Department of Science, Innovation, and Technology.

We currently have in New Jersey the Commission on Science, Innovation, and Technology, which with limited funding does a most commendable job in fulfilling its mission, namely: responsibility for strengthening the innovation economy within the State, encouraging collaboration and connectivity between industry and academia, and the translation of innovations into successful high-growth businesses.

A Department of Science, Innovation, and Technology would have three additional powers: 1) regulatory powers regarding auditing, self-learning, and AGI; 2) the ability to commission more in-depth studies regarding AI's cultural impact; and 3) the ability to coordinate scientific policy throughout the executive branch. Obviously, an increased level of funding would be necessary to execute these three functions.

I also have a recommendation for the first New Jersey Commissioner of Science, Innovation, and Technology: State Senator Andrew Zwicker (D-Middlesex). His brilliance and competence as a scientist, as demonstrated by his service at the Princeton Plasma Physics Laboratory, and his proven integrity and ethics in state government make him an ideal candidate for this role.

And to Henry Kissinger, my fellow Jew, I say to you: Mazal Tov on your 100th birthday! And like Moses in the Torah, may you live at least 120 years!

Alan J. Steinberg served as regional administrator of Region 2 EPA during the administration of former President George W. Bush and as executive director of the New Jersey Meadowlands Commission.


Visit link:

The Politics of Artificial Intelligence (AI) - National and New Jersey ... - InsiderNJ

As AutoGPT released, should we be worried about AI? – Cosmos

A new artificial intelligence tool coming just months after ChatGPT appears to offer a big leap forward: it can improve itself without human intervention.

The artificial intelligence (AI) tool AutoGPT is built on GPT-4 from OpenAI, the company which brought us ChatGPT last year. AutoGPT promises to overcome the limitations of large language models (LLMs) such as ChatGPT.

ChatGPT exploded onto the scene at the end of 2022 for its ability to respond to text prompts in a (somewhat) human-like and natural way. It has caused concern for occasionally including misleading or incorrect information in its responses, and for its potential to be used for plagiarising assignments in schools and universities.

But it's not these limitations that AutoGPT seeks to overcome.

AI is categorised as weak (narrow) or strong (general). As an AI tool designed to carry out a single task, ChatGPT is considered weak AI.

AutoGPT is created with a view to becoming a strong AI, or artificial general intelligence, theoretically capable of carrying out many different types of task, including those it wasn't originally designed to perform.

LLMs are designed to respond to prompts produced by human users: they answer one prompt, then await the next.

AutoGPT is being designed to give itself prompts, creating a loop. Masa, a writer on AutoGPT's website, explains: "It works by breaking a larger task into smaller sub-tasks and then spinning off independent Auto-GPT instances in order to work on them. The original instance acts as a kind of project manager, coordinating all of the work carried out and compiling it into a finished result."
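In code, the loop described in that quote might look something like the sketch below. It is a heavy simplification: plan() and execute() are hypothetical placeholders standing in for LLM calls and spawned Auto-GPT instances, not AutoGPT's actual implementation.

```python
# Sketch of a manager/worker agent loop: decompose a goal, run sub-tasks,
# compile the results. Placeholder functions stand in for real LLM calls.
def plan(goal: str) -> list[str]:
    # In AutoGPT, an LLM prompt would produce this sub-task list.
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def execute(subtask: str) -> str:
    # In AutoGPT, an independent instance would be spun off per sub-task.
    return f"result of ({subtask})"

def run_agent(goal: str) -> str:
    results = [execute(t) for t in plan(goal)]  # workers handle sub-tasks
    return "\n".join(results)                   # "project manager" compiles them

print(run_agent("write a market summary"))
```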

But is a self-improving AI a good thing? Many experts are worried about the trajectory of artificial intelligence research.

The respected and influential British Medical Journal has published an article titled "Threats by artificial intelligence to human health and human existence," in which the authors explain three key reasons we should be concerned about AI.


Threats identified by the international team of doctors and public health experts, including those from Australia, relate to misuse of AI and the impact of the ongoing failure to adapt to and regulate the technology.

The authors note the significance of AI and its potential to have a transformative effect on society. But they also warn that artificial general intelligence in particular poses an existential threat to humanity.

First, they warn of the ability of AI to clean, organise, and analyse massive data sets, including of personal data such as images. Such capabilities could be used to manipulate and distort information and for AI surveillance. The authors note that such surveillance is in development in more than 75 countries: nations "ranging from liberal democracies to military regimes" have been expanding such systems.

Second, they say Lethal Autonomous Weapon Systems (LAWS), capable of locating, selecting, and engaging human targets without the need for human supervision, could lead to killing at an industrial scale.

Finally, the authors raise concern over the loss of jobs that will come from the spread of AI technology in many industries. Estimates are that tens to hundreds of millions of jobs will be lost in the coming decade.

"While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behaviour," they write.

The authors highlight artificial general intelligence as a threat to the existence of human civilisation itself.

"We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans, is real and has to be considered."

"With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimise risk and harm and maximise benefit," they write.

Here is the original post:

As AutoGPT released, should we be worried about AI? - Cosmos

Opinion | We Need a Manhattan Project for AI Safety – POLITICO

At the heart of the threat is what's called the alignment problem: the idea that a powerful computer brain might no longer be aligned with the best interests of human beings. Unlike fairness, or job loss, there aren't obvious policy solutions to alignment. It's a highly technical problem that some experts fear may never be solvable. But the government does have a role to play in confronting massive, uncertain problems like this. In fact, it may be the most important role it can play on AI: to fund a research project on the scale it deserves.

Theres a successful precedent for this: The Manhattan Project was one of the most ambitious technological undertakings of the 20th century. At its peak, 129,000 people worked on the project at sites across the United States and Canada. They were trying to solve a problem that was critical to national security, and which nobody was sure could be solved: how to harness nuclear power to build a weapon.

Some eight decades later, the need has arisen for a government research project that matches the original Manhattan Project's scale and urgency. In some ways the goal is exactly the opposite of the first Manhattan Project, which opened the door to previously unimaginable destruction. This time, the goal must be to prevent unimaginable destruction, as well as merely difficult-to-anticipate destruction.

Don't just take it from me. Expert opinion only differs over whether the risks from AI are unprecedentedly large or literally existential.

Even the scientists who laid the groundwork for today's AI models are sounding the alarm. Most recently, the "Godfather of AI" himself, Geoffrey Hinton, quit his post at Google to call attention to the risks AI poses to humanity.

That may sound like science fiction, but it's a reality that is rushing toward us faster than almost anyone anticipated. Today, progress in AI is measured in days and weeks, not months and years.

As little as two years ago, the forecasting platform Metaculus put the likely arrival of weak artificial general intelligence (a unified system that can compete with the typical college-educated human on most tasks) sometime around the year 2040.

Now forecasters anticipate AGI will arrive in 2026. Strong AGIs with robotic capabilities that match or surpass most humans are forecasted to emerge just five years later. With the ability to automate AI research itself, the next milestone would be a superintelligence with unfathomable power.

Don't count on the normal channels of government to save us from that.

Policymakers cannot afford a drawn-out interagency process or notice-and-comment period to prepare for what's coming. On the contrary, making the most of AI's tremendous upside while heading off catastrophe will require our government to stop taking a backseat role and act with a nimbleness not seen in generations. Hence the need for a new Manhattan Project.

A "Manhattan Project for X" is one of those clichés of American politics that seldom merits the hype. AI is the rare exception. Ensuring AGI develops safely and for the betterment of humanity will require public investment into focused research, high levels of public and private coordination and a leader with the tenacity of General Leslie Groves, the project's infamous overseer, whose aggressive, top-down leadership style mirrored that of a modern tech CEO.


I'm not the only person to suggest it: AI thinker Gary Marcus and the legendary computer scientist Judea Pearl recently endorsed the idea as well, at least informally. But what exactly would that look like in practice?

Fortunately, we already know quite a bit about the problem and can sketch out the tools we need to tackle it.

One issue is that large neural networks like GPT-4 (the generative AIs that are causing the most concern right now) are mostly a black box, with reasoning processes we can't yet fully understand or control. But with the right setup, researchers can in principle run experiments that uncover particular circuits hidden within the billions of connections. This is known as mechanistic interpretability research, and it's the closest thing we have to neuroscience for artificial brains.
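The experimental workflow is easier to see on a toy model. The sketch below, a minimal PyTorch example of my own rather than anything from the article, records a small network's hidden activations with a forward hook; interpretability researchers apply the same basic idea, at vastly larger scale, to models like GPT-4.

```python
# Capture a network's internal activations so they can be inspected.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
captured = {}

def hook(module, inputs, output):
    captured["hidden"] = output.detach()   # snapshot the hidden-layer output

model[1].register_forward_hook(hook)       # attach the hook to the ReLU layer
model(torch.randn(1, 8))                   # one forward pass on random input
print(captured["hidden"].shape)            # -> torch.Size([1, 16])
```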

Unfortunately, the field is still young, and far behind in its understanding of how current models do what they do. The ability to run experiments on large, unrestricted models is mostly reserved for researchers within the major AI companies. The dearth of opportunities in mechanistic interpretability and alignment research is a classic public goods problem. Training large AI models costs millions of dollars in cloud computing services, especially if one iterates through different configurations. The private AI labs are thus hesitant to burn capital on training models with no commercial purpose. Government-funded data centers, in contrast, would be under no obligation to return value to shareholders, and could provide free computing resources to thousands of potential researchers with ideas to contribute.

The government could also ensure research proceeds in relative safety and provide a central connection for experts to share their knowledge.

With all that in mind, a Manhattan Project for AI safety should have at least five core functions:

1. It would serve a coordination role, pulling together the leadership of the top AI companies (OpenAI and its chief competitors, Anthropic and Google DeepMind) to disclose their plans in confidence, develop shared safety protocols and forestall the present arms-race dynamic.

2. It would draw on their talent and expertise to accelerate the construction of government-owned data centers managed under the highest security, including an "air gap," a deliberate disconnection from outside networks, ensuring that future, more powerful AIs are unable to escape onto the open internet. Such facilities would likely be overseen by the Department of Energy's Artificial Intelligence and Technology Office, given its existing mission to accelerate the demonstration of trustworthy AI.

3. It would compel the participating companies to collaborate on safety and alignment research, and require models that pose safety risks to be trained and extensively tested in secure facilities.

4. It would provide public testbeds for academic researchers and other external scientists to study the innards of large models like GPT-4, greatly building on existing initiatives like the National AI Research Resource and helping to grow the nascent field of AI interpretability.

5. And it would provide a cloud platform for training advanced AI models for within-government needs, ensuring the privacy of sensitive government data and serving as a hedge against runaway corporate power.

The alternative to a massive public effort like this, attempting to kick the can on the AI problem, won't cut it.

The only other serious proposal right now is a pause on new AI development, and even many tech skeptics see that as unrealistic. It may even be counterproductive. Our understanding of how powerful AI systems could go rogue is immature at best, but stands to improve greatly through continued testing, especially of larger models. Air-gapped data centers will thus be essential for experimenting with AI failure modes in a secured setting. This includes pushing models to their limits to explore potentially dangerous emergent behaviors, such as deceptiveness or power-seeking.

The Manhattan Project analogy is not perfect, but it helps to draw a contrast with those who argue that AI safety requires pausing research into more powerful models altogether. The project didn't seek to decelerate the construction of atomic weaponry, but to master it.

Even if AGIs end up being farther off than most experts expect, a Manhattan Project for AI safety is unlikely to go to waste. Indeed, many less-than-existential AI risks are already upon us, crying out for aggressive research into mitigation and adaptation strategies. So what are we waiting for?

More here:

Opinion | We Need a Manhattan Project for AI Safety - POLITICO

I created a billion-pound start-up business Elon Musk & Jeff Bezos asked to meet me – here's the secret to… – The Sun

A DAD who created a billion-pound start-up business has revealed the secret to his success.

Emad Mostaque, 40, is the founder and CEO of artificial intelligence giant Stability AI and has recently been in talks with the likes of Elon Musk and Jeff Bezos.

But the London dad-of-two has worked hard to get where he is today - and doesn't plan on stopping any time soon.

Emad has gone from developing AI at home to help his autistic son, to employing 150 people across the globe for his billion-pound empire.

The 40-year-old usually calls Notting Hill home, but has started travelling to San Francisco for work.

On his most recent trip, Emad met with Bezos, the founder and CEO of Amazon, and made a deal with Musk, the CEO of Twitter.

He says the secret to his success in the AI world is using it to help humans, not overtake them.

Emad told The Times: "I have a different approach to everyone else in this space, because I'm building narrow models to augment humans, whereas almost everyone else is trying to build an AGI [artificial general intelligence] to pretty much replace humans and look over them."

Emad is from Bangladesh, but his parents moved to the UK when he was a boy and settled the family in London's Walthamstow.

The dad said he was always good at numbers in school but struggled socially as he has Asperger's and ADHD.

The 40-year-old studied computer science and maths at Oxford, then became a hedge fund manager.

But when Emad's son was diagnosed with autism he quit to develop something to help the youngster.

Emad recalled: "We built an AI to look at all the literature and then extract what could be the case, and then the drug repurposing."

He says that homemade AI allowed his family to create an approach that took his son to a better, more cheerful place.

And, as a result, Emad inspired himself.

He started a charity that aims to give tablets loaded with AI tutors to one billion children.

He added: "Can you imagine if every child had their own AI looking out for them, a personalised system that teaches them and learns from them?

"In 10 to 20 years, when they grow up, those kids will change the world.

Emad also founded the billion-pound start-up Stability AI in recent years, and it's one of the companies behind Stable Diffusion.

The tool has taken the world by storm in recent months with its ability to create images that could pass as photos from a mere text prompt.
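For the curious, generating an image with a public Stable Diffusion checkpoint takes only a few lines using Hugging Face's diffusers library. This is a hedged sketch: the model ID below is one publicly released checkpoint, chosen here as an assumption, and the code is not Stability AI's own.

```python
# Text-to-image with a public Stable Diffusion checkpoint (needs a GPU).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # assumed public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```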

Today, Emad is continuing to develop AI - and he says it is one of the most important inventions in history.

He described it as falling somewhere between fire and the internal combustion engine.

Read the rest here:

I created a billion-pound start-up business Elon Musk & Jeff Bezos asked to meet me – here's the secret to... - The Sun