Archive for the ‘Artificial Intelligence’ Category

Windfall Geotek to Initiate CARDS Artificial Intelligence analysis within the Kirkland Lake Mining camp to Generate New Gold Targets – TheNewswire.ca

Brossard, Quebec - The Newswire - July 16, 2020 - Windfall Geotek (TSXV:WIN), a leader in the use of Artificial Intelligence (AI) in the mining sector for digital exploration, is pleased to announce that it has started to analyze the data-rich Kirkland Lake Mining camp using its CARDS Artificial Intelligence (AI) technology. The project area is approximately 932 km² and hosts many major gold discoveries and producers.

Michel Fontaine, President & CEO of Windfall Geotek, states: "We are confident we can replicate the big success we had in Red Lake given the abundance and the quality of public data available in the Kirkland Lake Mining camp. Our team will use our CARDS AI tool to thoroughly examine all available assays, drill holes and mag survey data to identify high-probability, high-similarity targets based on the digital signature of known deposits in the area. We will then be in a great position to conclude a strategic alliance in the near future and continue to draw attention to Windfall Geotek."

Highlights of CARDS AI analysis at the Kirkland Lake area

- The Kirkland Lake Mining Camp is in northeastern Ontario within the Abitibi Greenstone Belt and the Abitibi Gold Belt. Major structures within the camp include the Kirkland Lake Break and the Cadillac-Larder Lake Break, which runs approximately 200 km from Kirkland Lake, Ontario, into Val d'Or, Quebec.

- CARDS AI will build gold pattern signatures in one of the most prolific mining camps in Ontario.

- The project covers a total area of 932.45 km².

- The project hosts many known gold deposits: Kirkland Lake, Kerr-Addison-Chesterville, Macassa, Young-Davidson, McBean, Upper Canada, Omega, Eastmaque, Teck-Hughes.

- Geophysical data (Mag+DEM) at 15 m resolution from the Kirkland Lake-Larder Lake area survey will be utilized (GDS 1053, Ontario Geological Survey).

- Up to 4,771 gold training points originating from the Ontario Drill Hole Database will be utilized (Ontario Geological Survey).

- The project will yield initial results within 6 to 8 weeks.


Figure 1. Map view of the Kirkland Lake camp where CARDS AI will be used following a successful Red Lake project.
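
The release does not disclose how CARDS works internally, but the workflow it describes, building a digital signature from cells that intersect known deposits and then scoring the rest of the grid for similarity, maps onto a familiar supervised-classification pattern. The sketch below is illustrative only: the synthetic features, the random-forest model and the cutoff are assumptions, not the CARDS algorithm.

```python
# Hypothetical sketch of signature-based target ranking; CARDS internals are proprietary.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each grid cell is a feature vector, e.g. [magnetic intensity, elevation, assay density].
# Synthetic stand-ins here; a real run would draw on the GDS 1053 Mag+DEM grids and
# the ~4,771 Ontario Drill Hole Database training points cited above.
cells = rng.normal(size=(10_000, 3))
known_gold = rng.random(10_000) < 0.02          # cells intersecting known deposits

# Learn the "digital signature" of mineralized cells from the labeled examples.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(cells, known_gold)

# Score every cell against that signature and keep the highest-probability targets.
scores = model.predict_proba(cells)[:, 1]
top_targets = np.argsort(scores)[::-1][:50]
print(top_targets[:10], scores[top_targets[:10]])
```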

Dinesh Kandanchatha, Chairman of Windfall Geotek, states: "We are very pleased with the way that our CARDS AI is performing to date. With this internal project we will continue to demonstrate the power of our new business model, while building assets and value for our shareholders."

Windfall Geotek also welcomes Nathan Tribble to the Board of Directors today and wishes Mr. Jacques Letendre all the best in his future endeavors. Mr. Tribble, P.Geo. (ON), has over 14 years of professional experience in exploration and mining, with a particular focus on gold and base metal exploration and project evaluation. He is currently Vice President Exploration at Gatling Exploration Inc.; his past experience includes Senior Principal Geologist for Sprott Mining and Senior Geologist for Bonterra Resources, Jerritt Canyon Gold, Kerr Mines, Northern Gold, Lake Shore Gold and Vale Inco. Mr. Tribble sits on multiple boards within the mining industry, is registered as a Professional Geoscientist in Ontario and holds a Bachelor of Science degree in Geology from Laurentian University.

About Windfall Geotek - Powered by Artificial Intelligence (AI) since 2005

Windfall Geotek is a service company using Artificial Intelligence (AI) and holds an extensive portfolio of shares in its clients. Windfall Geotek can count on a multidisciplinary team that includes professionals in geophysics, geology, Artificial Intelligence and mathematics. The Company's objectives are to develop a new royalty stream by significantly enhancing, and participating in, the exploration success rate of mining companies, and to continue developing its land-mine detection application as a high priority.

For further information, please contact:

Michel Fontaine

President & CEO of Windfall Geotek

Telephone: 514-994-5843

Email: michel@windfallgeotek.com

Website: http://www.windfallgeotek.com

Additional information about the Company is available under the Windfall Geotek profile on SEDAR at http://www.sedar.com. Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.

FORWARD LOOKING STATEMENTS: This news release may contain forward-looking statements. Forward-looking statements are statements that are not historical facts and are generally, but not always, identified by the words "expects", "plans", "anticipates", "believes", "intends", "estimates", "projects", "potential" and similar expressions, or by statements that events or conditions "will", "would", "may", "could" or "should" occur. Although the Company believes the expectations expressed in such forward-looking statements are based on reasonable assumptions, such statements are not guarantees of future performance and actual results may differ materially from those in forward-looking statements. Forward-looking statements are based on the beliefs, estimates and opinions of the Company's management on the date such statements were made. The Company expressly disclaims any intention or obligation to update or revise any forward-looking statements whether as a result of new information, future events or otherwise. Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.

NOT FOR DISSEMINATION IN THE UNITED STATES OR FOR DISTRIBUTION TO U.S. NEWSWIRE SERVICES AND DOES NOT CONSTITUTE AN OFFER OF THE SECURITIES DESCRIBED HEREIN

Link:
Windfall Geotek to Initiate CARDS Artificial Intelligence analysis within the Kirkland Lake Mining camp to Generate New Gold Targets - TheNewswire.ca

What Defines Artificial Intelligence? The Complete WIRED …

Artificial intelligence is overhyped; there, we said it. It's also incredibly important.

Superintelligent algorithms aren't about to take all the jobs or wipe out humanity. But software has gotten significantly smarter of late. It's why you can talk to your friends as an animated poop on the iPhone X using Apple's Animoji, or ask your smart speaker to order more paper towels.

Tech companies' heavy investments in AI are already changing our lives and gadgets, and laying the groundwork for a more AI-centric future.

The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves training computers to perform tasks based on examples, rather than by relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.
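
To make "training by examples" concrete, here is a minimal, hypothetical sketch in Python: nobody writes the decision rule into the program; a model infers it from labeled examples. The data and the hidden rule are invented for illustration.

```python
# Learning from examples instead of hand-written rules: a toy sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 2))      # examples: two measurements each
y = (X[:, 0] + X[:, 1] > 10).astype(int)   # labels a human supplied

# The "x0 + x1 > 10" rule never appears in the code below;
# the model recovers a decision boundary purely from the examples.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[9.0, 4.0], [1.0, 2.0]]))   # expected: [1 0]
```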

For most of us, the most obvious results of the improved powers of AI are neat new gadgets and experiences such as smart speakers, or being able to unlock your iPhone with your face. But AI is also poised to reinvent other areas of life. One is health care. Hospitals in India are testing software that checks images of a person's retina for signs of diabetic retinopathy, a condition frequently diagnosed too late to prevent vision loss. Machine learning is vital to projects in autonomous driving, where it allows a vehicle to make sense of its surroundings.

There's evidence that AI can make us happier and healthier. But there's also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won't automatically be a better one.

The Beginnings of Artificial Intelligence

Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. He had high hopes of a breakthrough toward human-level machines. "We think that a significant advance can be made," he wrote with his co-organizers, "if a carefully selected group of scientists work on it together for a summer."

Moments that Shaped AI

1956

The Dartmouth Summer Research Project on Artificial Intelligence coins the name of a new field concerned with making software smart like humans.

1965

Joseph Weizenbaum at MIT creates Eliza, the first chatbot, which poses as a psychotherapist.

1975

Meta-Dendral, a program developed at Stanford to interpret chemical analyses, makes the first discoveries by a computer to be published in a refereed journal.

1987

A Mercedes van fitted with two cameras and a bunch of computers drives itself 20 kilometers along a German highway at more than 55 mph, in an academic project led by engineer Ernst Dickmanns.

1997

IBM's computer Deep Blue defeats chess world champion Garry Kasparov.

2004

The Pentagon stages the Darpa Grand Challenge, a race for robot cars in the Mojave Desert that catalyzes the autonomous-car industry.

2012

Researchers in a niche field called deep learning spur new corporate interest in AI by showing their ideas can make speech and image recognition much more accurate.

2016

AlphaGo, created by Google unit DeepMind, defeats a world champion player of the board game Go.

Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a proper academic field.

Early work often focused on solving fairly abstract problems in math and logic. But it wasn't long before AI started to show promising results on more human tasks. In the late 1950s Arthur Samuel created programs that learned to play checkers. In 1962 one scored a win over a master at the game. In 1967 a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.

As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for tasks like understanding language. Others were inspired by the importance of learning to human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone, as computers mastered more tasks that could previously be done only by people.

Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math loosely inspired by how brain cells work, known as artificial neural networks. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
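
A toy version of that adjustment process, assuming the simplest possible "network", a single sigmoid neuron trained by gradient descent; deep learning stacks many layers of this same idea.

```python
# Minimal illustration of "connections adjust as training data flows through".
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
y = (X @ np.array([1.5, -2.0]) > 0).astype(float)  # the pattern to be learned

w, b, lr = np.zeros(2), 0.0, 0.1                   # connections start blank
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))             # neuron's current interpretation
    grad_w = X.T @ (p - y) / len(y)                # how each connection erred
    grad_b = np.mean(p - y)
    w -= lr * grad_w                               # nudge the connection weights...
    b -= lr * grad_b                               # ...a little at a time
print(w, b)  # the weights drift toward the underlying pattern
```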

Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes, and got written up in The New York Times as the "Embryo of Computer Designed to Read and Grow Wiser." But neural networks tumbled from favor after an influential 1969 book co-authored by MIT's Marvin Minsky suggested they couldn't be very powerful.

Not everyone was convinced, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data and powerful computer chips could give machines new powers of perception.

View post:
What Defines Artificial Intelligence? The Complete WIRED ...

What is Artificial Intelligence (AI)? – Definition from …

While AI often invokes images of the sentient computer overlord of science fiction, the current reality is far different. At its heart, AI uses the same basic algorithmic functions that drive traditional software, but applies them in a different way.

A standard warehouse management system, for example, can show the current levels of various products, while an intelligent one could identify shortages, analyze the cause and its effect on the overall supply chain, and even take steps to correct it.
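
A rough sketch of that distinction, with invented stock figures and thresholds: the conventional system reports a number, while the "intelligent" one forecasts demand and flags stock that won't cover it.

```python
# Hypothetical contrast: reporting stock levels vs. predicting and acting on shortages.
from statistics import mean

stock = {"widgets": 40, "gaskets": 500}
daily_use = {"widgets": [22, 25, 30], "gaskets": [10, 12, 9]}  # recent history

def report(item):
    # A standard WMS just shows the current number.
    return f"{item}: {stock[item]} on hand"

def act_on_shortage(item, lead_days=3):
    # An "intelligent" WMS forecasts demand over the resupply window and reacts.
    forecast = mean(daily_use[item]) * lead_days
    if stock[item] < forecast:
        return f"reorder {item}: {stock[item]} on hand < {forecast:.0f} needed"
    return f"{item} ok"

for item in stock:
    print(report(item), "|", act_on_shortage(item))
```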

Artificial intelligence can be allowed to replace a whole system, making all decisions end-to-end, or it can be used to enhance a specific process.

For example, analyzing video footage to recognize gestures, or replacing peripheral devices (keyboard, mouse, touchscreen) with a speech-to-text system, giving the impression that one is interacting with a sentient being.

Just as philosophers debate the nature of man and the existence of free will, computer science experts debate the various types of AI.

- Narrow AI: capable of performing only a limited set of predetermined functions; think autonomous cars, retail kiosks, etc.;

- General AI: said to equal the human mind's ability to function autonomously according to a wide set of stimuli;

- Super AI: which will one day exceed human intelligence (and conceivably take over the world).

At the moment, Narrow AI is only beginning to enter mainstream computing applications.

A second classification scheme distinguishes four types of AI system:

- Reactive machines: can only react to existing situations, not past experiences.

- Limited memory: relies on stored data to learn from recent experiences to make decisions.

- Theory of mind: capable of comprehending conversational speech, emotions, non-verbal cues and other intuitive elements.

- Self-aware: human-level consciousness with its own desires, goals and objectives.

A good way to visualize these distinctions would be an AI-driven poker player. A reactive machine would base decisions only on the current hand in play, while a limited memory version would consider past decisions and player profiles.

Using Theory of Mind, however, the program would pick up on speech and facial cues, and a self-aware AI might start to consider if there is something more worthwhile to do than play poker.
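
A minimal sketch of that contrast, with made-up poker logic: the reactive agent is a pure function of the current hand, while the limited-memory agent folds stored observations into each decision.

```python
# Toy rendering of the poker example above; all thresholds are invented.
class ReactiveAgent:
    def decide(self, hand_strength):
        # Only the current hand matters; nothing is remembered.
        return "raise" if hand_strength > 0.7 else "fold"

class LimitedMemoryAgent:
    def __init__(self):
        self.opponent_raises = 0    # stored experience
        self.hands_seen = 0

    def observe(self, opponent_raised):
        self.hands_seen += 1
        self.opponent_raises += opponent_raised

    def decide(self, hand_strength):
        # Past behavior shifts the threshold: a raise-happy opponent bluffs more.
        bluff_rate = self.opponent_raises / max(self.hands_seen, 1)
        return "raise" if hand_strength > 0.7 - 0.2 * bluff_rate else "fold"

agent = LimitedMemoryAgent()
agent.observe(opponent_raised=1)
print(agent.decide(hand_strength=0.6))   # memory of bluffing tips this to "raise"
```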

AI is currently being applied to a range of functions both in the lab and in commercial/consumer settings:

- Speech recognition: allows intelligent systems to convert human speech into text or code.

- Natural language processing: a subset of speech recognition that enables conversational interaction between humans and computers.

- Image recognition: allows a machine to scan an image and identify it using comparative analysis.

Perhaps the most revolutionary aspect of AI, however, is that it allows software to rewrite itself as it adapts to its environment.

Unlike traditional upgrade programs that take years and are often buggy, or even newer DevOps processes that push changes quickly with less disruption, AI allows a given program to optimize itself to highly specialized use cases.

This should not only lower the cost of software licensing and support, but also provide steadily improving performance and the development of unique processes that deliver crucial advantages in an increasingly competitive economy.


Read this article:
What is Artificial Intelligence (AI)? - Definition from ...

7 Ways An Artificial Intelligence Future Will Change The …

"[AI] is going to change the world more than anything in the history of mankind. More than electricity." (AI oracle and venture capitalist Dr. Kai-Fu Lee, 2018)

In a nondescript building close to downtown Chicago, Marc Gyongyosi and the small but growing crew of IFM/Onetrack.AI have one rule that rules them all: think simple. The words are written in simple font on a simple sheet of paper that's stuck to a rear upstairs wall of their industrial two-story workspace. What they're doing here with artificial intelligence, however, isn't simple at all.

Sitting at his cluttered desk, located near an oft-used ping-pong table and prototypes of drones from his college days suspended overhead, Gyongyosi punches some keys on a laptop to pull up grainy video footage of a forklift driver operating his vehicle in a warehouse. It was captured from overhead courtesy of a Onetrack.AI forklift vision system.

Artificial intelligence is impacting the future of virtually every industry and every human being. Artificial intelligence has acted as the main driver of emerging technologies like big data, robotics and IoT, and it will continue to act as a technological innovator for the foreseeable future.

Employing machine learning and computer vision for detection and classification of various safety events, the shoebox-sized device doesn't see all, but it sees plenty. Like which way the driver is looking as he operates the vehicle, how fast he's driving, where he's driving, locations of the people around him and how other forklift operators are maneuvering their vehicles. IFM's software automatically detects safety violations (for example, cell phone use) and notifies warehouse managers so they can take immediate action. The main goals are to prevent accidents and increase efficiency. The mere knowledge that one of IFM's devices is watching, Gyongyosi claims, has had a huge effect.

"If you think about a camera, it really is the richest sensor available to us today at a very interesting price point," he says. "Because of smartphones, camera and image sensors have become incredibly inexpensive, yet we capture a lot of information. From an image, we might be able to infer 25 signals today, but six months from now we'll be able to infer 100 or 150 signals from that same image. The only difference is the software that's looking at the image. And that's why this is so compelling, because we can offer a very important core feature set today, but then over time all our systems are learning from each other. Every customer is able to benefit from every other customer that we bring on board because our systems start to see and learn more processes and detect more things that are important and relevant."
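
Onetrack.AI's models are proprietary, so the following is only a schematic of the detect-classify-notify loop described above; the event names, frame format and classifier stub are all hypothetical.

```python
# Hypothetical detect-and-notify loop in the spirit of the system described.
EVENTS_REQUIRING_ACTION = {"cell_phone_use", "speeding", "pedestrian_too_close"}

def classify_frame(frame):
    """Stand-in for a trained vision model returning detected safety events."""
    return frame.get("events", [])

def monitor(stream, notify):
    for frame in stream:
        for event in classify_frame(frame):
            if event in EVENTS_REQUIRING_ACTION:
                notify(f"safety violation: {event} (forklift {frame['forklift_id']})")

frames = [{"forklift_id": 7, "events": ["cell_phone_use"]},
          {"forklift_id": 3, "events": []}]
monitor(frames, notify=print)   # -> safety violation: cell_phone_use (forklift 7)
```

Note how the claim about systems improving over time fits this shape: swapping in a better `classify_frame` adds signals without touching the surrounding loop.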

IFM is just one of countless AI innovators in a field that's hotter than ever and getting more so all the time. Here's a good indicator: of the 9,100 patents received by IBM inventors in 2018, 1,600 (or nearly 18 percent) were AI-related. Here's another: Tesla founder and tech titan Elon Musk recently donated $10 million to fund ongoing research at the non-profit research company OpenAI, a mere drop in the proverbial bucket if his $1 billion co-pledge in 2015 is any indication. And in 2017, Russian president Vladimir Putin told school children that "Whoever becomes the leader in this sphere [AI] will become the ruler of the world." He then tossed his head back and laughed maniacally.

OK, that last thing is false. This, however, is not: after more than seven decades marked by hoopla and sporadic dormancy, a multi-wave evolutionary period that began with so-called knowledge engineering, progressed to model- and algorithm-based machine learning, and is increasingly focused on perception, reasoning and generalization, AI has re-taken center stage as never before. And it won't cede the spotlight anytime soon.

There's virtually no major industry that modern AI (more specifically, "narrow AI," which performs objective functions using data-trained models and often falls into the categories of deep learning or machine learning) hasn't already affected. That's especially true in the past few years, as data collection and analysis have ramped up considerably thanks to robust IoT connectivity, the proliferation of connected devices and ever-speedier computer processing.

Some sectors are at the start of their AI journey, others are veteran travelers. Both have a long way to go. Regardless, the impact artificial intelligence is having on our present-day lives is hard to ignore:

But those advances (and numerous others, including this crop of new ones) are only the beginning; there's much more to come, more than anyone, even the most prescient prognosticators, can fathom.

"I think anybody making assumptions about the capabilities of intelligent software capping out at some point are mistaken," says David Vandegrift, CTO and co-founder of the customer relationship management firm 4Degrees.

With companies spending a collective $20 billion annually on AI products and services, tech giants like Google, Apple, Microsoft and Amazon spending billions to create those products and services, universities making AI a more prominent part of their curricula (MIT alone is dropping $1 billion on a new college devoted solely to computing, with an AI focus), and the U.S. Department of Defense upping its AI game, big things are bound to happen. Some of those developments are well on their way to being fully realized; some are merely theoretical and might remain so. All are disruptive, for better and potentially worse, and there's no downturn in sight.

"Lots of industries go through this pattern of winter, winter, and then an eternal spring," former Google Brain leader and Baidu chief scientist Andrew Ng told ZDNet late last year. "We may be in the eternal spring of AI."

During a lecture last fall at Northwestern University, AI guru Kai-Fu Lee championed AI technology and its forthcoming impact while also noting its side effects and limitations. Of the former, he warned:

"The bottom 90 percent, especially the bottom 50 percent of the world in terms of income or education, will be badly hurt with job displacement ... The simple question to ask is, 'How routine is a job?' And that is how likely [it is] a job will be replaced by AI, because AI can, within the routine task, learn to optimize itself. And the more quantitative, the more objective the job is, separating things into bins, washing dishes, picking fruits and answering customer service calls, those are very much scripted tasks that are repetitive and routine in nature. In the matter of five, 10 or 15 years, they will be displaced by AI."

In the warehouses of online giant and AI powerhouse Amazon, which buzz with more than 100,000 robots, picking and packing functions are still performed by humans, but that will change.

Lee's opinion was recently echoed by Infosys president Mohit Joshi, who at this year's Davos gathering told the New York Times, "People are looking to achieve very big numbers. Earlier they had incremental, 5 to 10 percent goals in reducing their workforce. Now they're saying, 'Why can't we do it with 1 percent of the people we have?'"

On a more upbeat note, Lee stressed that today's AI is useless in two significant ways: it has no creativity and no capacity for compassion or love. Rather, it's a tool to amplify human creativity. His solution? Those with jobs that involve repetitive or routine tasks must learn new skills so as not to be left by the wayside. Amazon even offers its employees money to train for jobs at other companies.

"One of the absolute prerequisites for AI to be successful in many [areas] is that we invest tremendously in education to retrain people for new jobs," says Klara Nahrstedt, a computer science professor at the University of Illinois at Urbana-Champaign and director of the school's Coordinated Science Laboratory.

She's concerned that's not happening widely or often enough. IFM's Gyongyosi is even more specific.

"People need to learn about programming like they learn a new language," he says, "and they need to do that as early as possible because it really is the future. In the future, if you don't know coding, you don't know programming, it's only going to get more difficult."


And while many of those who are forced out of jobs by technology will find new ones, Vandegrift says, that won't happen overnight. As with America's transition from an agricultural to an industrial economy during the Industrial Revolution, which played a big role in causing the Great Depression, people eventually got back on their feet. The short-term impact, however, was massive.

"The transition between jobs going away and new ones [emerging]," Vandegrift says, "is not necessarily as painless as people like to think."

"In the future, if you dont know coding, you dont know programming, its only going to get more difficult.

Mike Mendelson, a learner experience designer for NVIDIA, is a different kind of educator than Nahrstedt. He works with developers who want to learn more about AI and apply that knowledge to their businesses.

"If they understand what the technology is capable of and they understand the domain very well, they start to make connections and say, 'Maybe this is an AI problem, maybe that's an AI problem,'" he says. "That's more often the case than 'I have a specific problem I want to solve.'"

In Mendelson's view, some of the most intriguing AI research and experimentation that will have near-future ramifications is happening in two areas: reinforcement learning, which deals in rewards and punishment rather than labeled data; and generative adversarial networks (GANs for short), which allow computer algorithms to create rather than merely assess by pitting two nets against each other. The former is exemplified by the Go-playing prowess of Google DeepMind's AlphaGo Zero, the latter by original image or audio generation that's based on learning about a certain subject like celebrities or a particular type of music.
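
Of the two, reinforcement learning is the easier to show in miniature. The sketch below is generic tabular Q-learning, not DeepMind's AlphaGo Zero (which couples RL with deep networks and tree search): the agent receives no labels, only occasional reward, and the estimated value of each action is nudged toward whatever the reward signal implies.

```python
# Minimal tabular Q-learning: learning from reward and punishment, not labeled data.
import random

n_states, n_actions = 5, 2                     # toy corridor; action 1 moves right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    nxt = min(state + action, n_states - 1)
    reward = 1.0 if nxt == n_states - 1 else 0.0   # payoff only at the far end
    return nxt, reward

state = 0
for _ in range(2000):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    nxt, reward = step(state, action)
    # The reward (or its absence) adjusts the value of the chosen action.
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = 0 if nxt == n_states - 1 else nxt      # restart after reaching the goal
print(Q)  # "move right" ends up valued higher in every state
```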

On a far grander scale, AI is poised to have a major effect on sustainability, climate change and environmental issues. Ideally, and partly through the use of sophisticated sensors, cities will become less congested, less polluted and generally more livable. Inroads are already being made.

"Once you predict something, you can prescribe certain policies and rules," Nahrstedt says. Sensors on cars that send data about traffic conditions, for example, could predict potential problems and optimize the flow of cars. "This is not yet perfected by any means," she says. "It's just in its infancy. But years down the road, it will play a really big role."
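
As a toy rendering of that predict-then-prescribe loop, with invented sensor readings: a short-horizon forecast triggers a rerouting rule before congestion materializes.

```python
# Hypothetical predict-then-prescribe loop for traffic flow.
readings = {"5th_ave": [40, 55, 70], "oak_st": [20, 22, 21]}  # cars/min from sensors

def forecast(history):
    # Naive prediction: recent trend extrapolated one step ahead.
    return history[-1] + (history[-1] - history[0]) / len(history)

for road, history in readings.items():
    predicted = forecast(history)
    if predicted > 75:                       # prescribed policy threshold
        print(f"{road}: predicted {predicted:.0f} cars/min, reroute traffic")
    else:
        print(f"{road}: predicted {predicted:.0f} cars/min, no action")
```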

Of course, much has been made of the fact that AI's reliance on big data is already impacting privacy in a major way. Look no further than Cambridge Analytica's Facebook shenanigans or Amazon's Alexa eavesdropping, two among many examples of tech gone wild. Without proper regulations and self-imposed limitations, critics argue, the situation will get even worse. In 2015, Apple CEO Tim Cook derided competitors Google and Facebook (surprise!) for greed-driven data mining.

"They're gobbling up everything they can learn about you and trying to monetize it," he said in a 2015 speech. "We think that's wrong."

Last fall, during a talk in Brussels, Belgium, Cook expounded on his concern.

"Advancing AI by collecting huge personal profiles is laziness, not efficiency," he said. "For artificial intelligence to be truly smart, it must respect human values, including privacy. If we get this wrong, the dangers are profound."


Plenty of others agree. In a paper published recently by UK-based human rights and privacy groups Article 19 and Privacy International, anxiety about AI is reserved for its everyday functions rather than a cataclysmic shift like the advent of robot overlords.

"If implemented responsibly, AI can benefit society," the authors write. "However, as is the case with most emerging technology, there is a real risk that commercial and state use has a detrimental impact on human rights. In particular, applications of these technologies frequently rely on the generation, collection, processing, and sharing of large amounts of data, both about individual and collective behavior. This data can be used to profile individuals and predict future behavior. While some of these uses, like spam filters or suggested items for online shopping, may seem benign, others can have more serious repercussions and may even pose unprecedented threats to the right to privacy and the right to freedom of expression and information ('freedom of expression'). The use of AI can also impact the exercise of a number of other rights, including the right to an effective remedy, the right to a fair trial, and the right to freedom from discrimination."

Speaking at London's Westminster Abbey in late November of 2018, internationally renowned AI expert Stuart Russell joked (or not) about his "formal agreement with journalists that I won't talk to them unless they agree not to put a Terminator robot in the article." His quip revealed an obvious contempt for Hollywood representations of far-future AI, which tend toward the overwrought and apocalyptic. What Russell referred to as "human-level AI," also known as artificial general intelligence, has long been fodder for fantasy. But the chances of its being realized anytime soon, or at all, are pretty slim. The machines almost certainly won't rise (sorry, Dr. Russell) during the lifetime of anyone reading this story.

"There are still major breakthroughs that have to happen before we reach anything that resembles human-level AI," Russell explained. "One example is the ability to really understand the content of language so we can translate between languages using machines ... When humans do machine translation, they understand the content and then express it. And right now machines are not very good at understanding the content of language. If that goal is reached, we would have systems that could then read and understand everything the human race has ever written, and this is something that a human being can't do ... Once we have that capability, you could then query all of human knowledge and it would be able to synthesize and integrate and answer questions that no human being has ever been able to answer because they haven't read and been able to put together and join the dots between things that have remained separate throughout history."

That's a mouthful. And a mind full. On the subject of which, emulating the human brain is exceedingly difficult, and is yet another reason for AGI's still-hypothetical future. Longtime University of Michigan engineering and computer science professor John Laird has conducted research in the field for several decades.

"The goal has always been to try to build what we call the cognitive architecture, what we think is innate to an intelligence system," he says of work that's largely inspired by human psychology. "One of the things we know, for example, is the human brain is not really just a homogenous set of neurons. There's a real structure in terms of different components, some of which are associated with knowledge about how to do things in the world."

That's called procedural memory. Then there's knowledge based on general facts, a.k.a. semantic memory, as well as knowledge about previous experiences (or personal facts), a.k.a. episodic memory. One of the projects at Laird's lab involves using natural-language instructions to teach a robot simple games like Tic-Tac-Toe and puzzles. Those instructions typically involve a description of the goal, a rundown of legal moves and failure situations. The robot internalizes those directives and uses them to plan its actions. As ever, though, breakthroughs are slow to come; slower, anyway, than Laird and his fellow researchers would like.
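
As an illustration of what such instructions might look like once internalized, here is a hypothetical encoding for Tic-Tac-Toe: a goal, a legal-move test and a failure condition a planner can consult. Laird's lab works in the Soar cognitive architecture, not Python; this shows only the structure.

```python
# Hypothetical encoding of taught instructions: goal, legal moves, failure situations.
game_spec = {
    "goal": "place three of your marks in a row, column, or diagonal",
    "legal_move": lambda board, cell: board[cell] == " ",  # only empty cells
    "failure": lambda board: " " not in board,             # board full without a win
}

def candidate_moves(board, spec):
    # The planner only considers moves the internalized instructions permit.
    return [c for c in range(9) if spec["legal_move"](board, c)]

board = list("X O  O X ")
print(candidate_moves(board, game_spec))   # -> [1, 3, 4, 6, 8]
```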

"Every time we make progress," he says, "we also get a new appreciation for how hard it is."

More than a few leading AI figures subscribe (some more hyperbolically than others) to a nightmare scenario involving what's known as "the singularity," whereby superintelligent machines take over and permanently alter human existence through enslavement or eradication.

The late theoretical physicist Stephen Hawking famously postulated that if AI itself begins designing better AI than human programmers, the result could be "machines whose intelligence exceeds ours by more than ours exceeds that of snails." Elon Musk believes, and has for years warned, that AGI is humanity's biggest existential threat. Efforts to bring it about, he has said, are like "summoning the demon." He has even expressed concern that his pal, Google co-founder and Alphabet CEO Larry Page, could accidentally shepherd something "evil" into existence despite his best intentions. Say, for example, "a fleet of artificial intelligence-enhanced robots capable of destroying mankind." (Musk, you might know, has a flair for the dramatic.) Even IFM's Gyongyosi, no alarmist when it comes to AI predictions, rules nothing out. At some point, he says, humans will no longer need to train systems; they'll learn and evolve on their own.

"I don't think the methods we use currently in these areas will lead to machines that decide to kill us," he says. "I think that maybe five or ten years from now, I'll have to reevaluate that statement because we'll have different methods available and different ways to go about these things."

While murderous machines may well remain fodder for fiction, many believe they'll supplant humans in various ways.

Last spring, Oxford University's Future of Humanity Institute published the results of an AI survey. Titled "When Will AI Exceed Human Performance? Evidence from AI Experts," it contains estimates from 352 machine learning researchers about AI's evolution in years to come. There were lots of optimists in this group. By 2026, a median of respondents said, machines will be capable of writing school essays; by 2027 self-driving trucks will render drivers unnecessary; by 2031 AI will outperform humans in the retail sector; by 2049 AI could be the next Stephen King and by 2053 the next Charlie Teo. The slightly jarring capper: by 2137, all human jobs will be automated. But what of humans themselves? Sipping umbrella drinks served by droids, no doubt.

Diego Klabjan, a professor at Northwestern University and founding director of the school's Master of Science in Analytics program, counts himself an AGI skeptic.

"Currently, computers can handle a little more than 10,000 words," he explains. "So, a few million neurons. But human brains have billions of neurons that are connected in a very intriguing and complex way, and the current state-of-the-art [technology] is just straightforward connections following very easy patterns. So going from a few million neurons to billions of neurons with current hardware and software technologies, I don't see that happening."

Klabjan also puts little stock in extreme scenarios, the type involving, say, murderous cyborgs that turn the earth into a smoldering hellscape. He's much more concerned with machines, war robots, for instance, being fed faulty incentives by nefarious humans. As MIT physics professor and leading AI researcher Max Tegmark put it in a 2018 TED Talk, "The real threat from AI isn't malice, like in silly Hollywood movies, but competence: AI accomplishing goals that just aren't aligned with ours." That's Laird's take, too.

"I definitely don't see the scenario where something wakes up and decides it wants to take over the world," he says. "I think that's science fiction and not the way it's going to play out."

What Laird worries most about isn't evil AI, per se, but evil humans using AI as a sort of false force multiplier for things like bank robbery and credit card fraud, among many other crimes. And so, while he's often frustrated with the pace of progress, AI's slow burn may actually be a blessing.

"Time to understand what we're creating and how we're going to incorporate it into society," Laird says, "might be exactly what we need."

But no one knows for sure.

"There are several major breakthroughs that have to occur, and those could come very quickly," Russell said during his Westminster talk. Referencing the rapid transformational effect of atom splitting by physicist Ernest Rutherford in 1917, he added, "It's very, very hard to predict when these conceptual breakthroughs are going to happen."

But whenever they do, if they do, he emphasized the importance of preparation. That means starting or continuing discussions about the ethical use of AGI and whether it should be regulated. That means working to eliminate data bias, which has a corrupting effect on algorithms and is currently a fat fly in the AI ointment. That means working to invent and augment security measures capable of keeping the technology in check. And it means having the humility to realize that just because we can doesn't mean we should.

"Our situation with technology is complicated, but the big picture is rather simple," Tegmark said during his TED Talk. "Most AGI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history. It could enable brutal global dictatorship with unprecedented inequality, surveillance, suffering and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody's healthy and free to live out their dreams."

Read more here:
7 Ways An Artificial Intelligence Future Will Change The ...

Artificial Intelligence Systems Will Need to Have Certification, CISA Official Says – Nextgov

Vendors of artificial intelligence technology should not be shielded by intellectual property claims and will have to disclose elements of their designs and be able to explain how their offering works in order to establish accountability, according to a leading official from the Cybersecurity and Infrastructure Security Agency.

"I don't know how you can have a black-box algorithm that's proprietary and then be able to deploy it and be able to go off and explain what's going on," said Martin Stanley, a senior technical advisor who leads the development of CISA's artificial intelligence strategy. "I think those things are going to have to be made available through some kind of scrutiny and certification around them so that those integrating them into other systems are going to be able to account for what's happening."

Stanley was among the speakers on a recent Nextgov and Defense One panel where government officials, including a member of the National Security Commission on Artificial Intelligence, shared some of the ways they are trying to balance reaping the benefits of artificial intelligence with risks the technology poses.

Experts often discuss the rewards of programming machines to do tasks humans would otherwise have to labor on, for both offensive and defensive cybersecurity maneuvers, but the algorithms behind such systems and the data used to train them to take such actions are also vulnerable to attack. And the question of accountability applies to users and developers of the technology.

Artificial intelligence systems are code that humans write, but they exercise their abilities and become stronger and more efficient using data that is fed to them. If the data is manipulated, or poisoned, the outcomes can be disastrous.

Changes to the data could be things that humans wouldn't necessarily recognize, but that computers do.

"We've seen ... trivial alterations that can throw off some of those results, just by changing a few pixels in an image in a way that a person might not even be able to tell," said Josephine Wolff, a Tufts University cybersecurity professor who was also on the panel.
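
A toy version of Wolff's point, assuming a plain linear classifier rather than a deep network: a per-pixel nudge far smaller than the image's natural variation is enough to flip the decision. Real attacks (e.g., the fast gradient sign method) exploit the same geometry.

```python
# Tiny, uniform per-pixel changes flip a linear classifier's decision.
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=100)                  # weights of a (toy) trained classifier
image = rng.normal(size=100)              # flattened 10x10 "image"
score = image @ w                         # sign of score = predicted class

eps = 1.1 * abs(score) / np.abs(w).sum()  # just enough to cross the boundary
adversarial = image - eps * np.sign(score) * np.sign(w)

print(np.sign(score), np.sign(adversarial @ w))  # the sign, and decision, flips
print(f"max per-pixel change: {eps:.3f}")        # small next to pixel scale ~1
```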

And while it's true that behind every AI algorithm is a human coder, the designs are becoming so complex "that you're looking at automated decision-making where the people who have designed the system are not actually fully in control of what the decisions will be," Wolff says.

This makes for a threat vector where vulnerabilities are harder to detect until it's too late.

"With AI, there's much more potential for vulnerabilities to stay covert than with other threat vectors," Wolff said. "As models become increasingly complex, it can take longer to realize that something is wrong before there's a dramatic outcome."

For this reason, Stanley said, an overarching factor CISA uses to help determine which use cases AI gets applied to within the agency is the extent to which they offer high benefits and low regrets.

"We pick ones that are understandable and have low complexity," he said.

Among the other things federal personnel need to be mindful of is who has access to the training data.

"You can imagine you get an award done, and everyone knows how hard that is from the beginning, and then the first thing that the vendor says is, 'OK, send us all your data so we can train the algorithm; how's that going to work?'" he said. "Those are the kinds of concerns that we have to be able to address."

"We're going to have to continuously demonstrate that we are using the data for the purpose that it was intended," he said, adding, "There's some basic science that speaks to how you interact with algorithms and what kind of access you can have to the training data. Those kinds of things really need to be understood by the people who are deploying them."

A crucial but very difficult element to establish is liability. Wolff said that, ideally, liability would be connected to a potential certification program in which an entity audits artificial intelligence systems for factors like transparency and explainability.

That's important, she said, for answering the question of "how can we incentivize companies developing these algorithms to feel really heavily the weight of getting them right and be sure to do their own due diligence, knowing that there are serious penalties for failing to secure them effectively."

But this is hard, even in the world of software development more broadly.

"Making the connection is still very unresolved. We're still in the very early stages of determining what a certification process would look like, who would be in charge of issuing it, and what kind of legal protection or immunity you might get if you went through it," she said. "Software developers and companies have been working for a very long time, especially in the U.S., under the assumption that they can't be held legally liable for vulnerabilities in their code, and when we start talking about liability in the machine learning and AI context, we have to recognize that that's part of what we're grappling with: an industry that for a very long time has had very strong protections from any liability."

View from the Commission

Responding to this, Katharina McFarland, a member of the National Security Commission on Artificial Intelligence, referenced the Pentagon's Cybersecurity Maturity Model Certification program.

The point of the CMMC is to establish liability for Defense contractors, Defense Acquisitions Chief Information Security Officer Katie Arrington has said. But McFarland highlighted difficulties facing CMMC that program officials themselves have acknowledged.

"I'm sure you've heard of the [CMMC]; there's a lot of thought going on. The question is the policing of it," she said. "When you consider the proliferation of the code that's out there, and the global nature of it, you really will have a challenge trying to take a full thread and pull it through a knothole to try to figure out where that responsibility is. Our borders are very porous, and machines that we buy from another nation may not be built with the same biases that we have."

McFarland, a former head of Defense acquisitions, stressed that AI is more often than not viewed with fear and said she wanted to see more of a balance in procurement considerations for the technology.

"I found that we had a perverse incentive built into our system, and that was that we took, sometimes, I think, extraordinary measures to try to creep into the one percent area for failure," she said. "In other words, we would want to 110% test a system, and in doing so, we might miss the venue where its applicability in a theater to protect soldiers, sailors, airmen and Marines is needed."

She highlighted, up front, a need for testing and verification but said it shouldn't be done at the expense of adoption. To that end, she asked that industry help by sharing the testing tools it uses.

"I would encourage industry to think about this from the standpoint of what tools we would need, because they're using them, in the department, in the federal space, in the community, to give us transparency and verification," she said, "so that we have a high confidence in the utility, in the data that we're using and the AI algorithms that we're building."

More here:
Artificial Intelligence Systems Will Need to Have Certification, CISA Official Says - Nextgov