Archive for the ‘AlphaGo’ Category

Purdue President Chiang to grads: Let Boilermakers lead in … – Purdue University

Purdue President Mung Chiang made these remarks during the university's Spring Commencement ceremonies, May 12-14.

Opening

Today is not just any graduation but the commencement at a special place called Purdue, with a history that is rich and distinct and an accelerating momentum of excellence at scale. There is nothing more exciting than to see thousands of Boilermakers celebrate a milestone in your lives with those who have supported you. And this commencement has a special meaning to me as my first in the new role serving our university.

President Emeritus Mitch Daniels gave 10 commencement speeches, each an original treatise, throughout the Daniels Decade. I was tempted to simply ask generative AI engines to write this one for me. But I thought it'd be more fun to say a few thematic words by a human for fellow humans before that becomes unfashionable.

AI at Purdue

Sometime back in the mid-20th century, AI was a hot topic for a while. Now it is again; so hot that no computation is too basic to self-anoint as AI and no challenge seems too grand to be out of its reach. But the more you know how tools such as machine learning work, the less mysterious they become.

For the moment, let's assume that AI will finally be transformational to every industry and to everyone: changing how we live, shaping what we believe in, displacing jobs. And disrupting education.

Well, after IBM's Deep Blue beat the world champion, we still play chess. After calculators, children are still taught how to add numbers. Human beings learn and do things not just as survival skills, but also for fun, or as a training of our mind.

That doesn't mean we don't adapt. Once calculators became prevalent, elementary schools pivoted to translating real-world problems into math formulations rather than training for speed in adding numbers. Once online search became widely available, colleges taught students how to properly cite online sources.

Some have explored banning AI in education. That would be hard to enforce; it's also unhealthy, as students need to function in an AI-infused workplace upon graduation. We would rather Purdue evolve: teaching AI and teaching with AI.

That's why Purdue offers multiple major and minor degrees, fellowships and scholarships in AI and in its applications. Some will be offered as affordable online credentials, so please consider coming back to get another Purdue degree and enjoy more final exams!

And that's why Purdue will explore the best way to use AI in serving our students: to streamline processes and enhance efficiency so that individualized experiences can be offered at scale in West Lafayette. Machines free up human time so that we can do less and watch Netflix on a couch, or we can do more and create more with the time saved.

Pausing AI research is even less practical, not least because AI is not a well-defined, clearly demarcated area in isolation. All universities and companies around the world would have to stop any research that involves math. My Ph.D. co-advisor, Professor Tom Cover, did groundbreaking work in the 1960s on neural networks and statistics, not realizing those would later become useful in what others call AI. We would rather Purdue advance AI research with nuanced appreciation of the pitfalls, limitations and unintended consequences in its deployment.

That's why Purdue just launched the university-wide Institute of Physical AI. Our faculty are the leaders at the intersection of virtual and physical, where the bytes of AI meet the atoms of what we grow, make and move, from agriculture tech to personalized health care. Some of Purdue's experts develop AI to check and contain AI through privacy-preserving cybersecurity and fake video detection.

Limitations and Limits

As it stands today, AI is good at following rules, not breaking rules; reinforcing patterns, not creating patterns; mimicking what's given, not imagining beyond their combinations. Even individualization algorithms, ironically, work by first grouping many individuals into a small number of similarity classes.

At least for now, the more we advance artificial intelligence, the more we marvel at human intelligence. Deep Blue vs. Kasparov, or AlphaGo vs. Lee, were not fair comparisons: the machines used four orders of magnitude more energy per second! Both the biological mechanisms that generate energy from food and the amount of work we do per joule must be astounding to machines' envy. Can AI be as energy efficient as it is fast? Can it take in energy sources other than electricity? When someday it does, and when combined with sensors and robotics that touch the physical world, you'd have to wonder about the fundamental differences between humans and machines.

Can AI, one day, make AI? And stop AI?

Can AI laugh, cry and dream? Can it contain multitudes and contradictions like Walt Whitman?

Will AI be aware of itself, and will it have a soul, however awareness and souls are defined? Will it also be T.S. Eliot's "infinitely suffering things"?

Where does an AI life start and stop anyway? What constitutes the identity of one AI, and how can it live without having to die? Indeed, if the memory and logic chips sustain and merge, is AI all collectively one life? And if AI duplicates a human's mind and memory, is that human life going to stay on forever, too?

These questions will stay hypothetical until breakthroughs arrive that are more architectural than just compounding silicon chips' speed and feeding exploding data into black-box algorithms.

However, if, given sufficient time, some of these questions are bound to eventually become real, what then is uniquely human? What would still be artificial about artificial intelligence? Some of that eventuality might, with bumps and twists, show up faster than we had thought. Perhaps in your generation!

Freedoms and Rights

If Boilermakers must face these questions, perhaps it does less harm to consider off switches controlled by individual citizens than a ban by some bureaucracy. May the medicine be no worse than the disease, and regulations by government agencies not be granular or static, for governments don't have a track record of understanding fast-changing technologies, let alone micromanaging them. Some might even argue that government access to data and arbitration of algorithms counts among the most worrisome uses of AI.

What we need are basic guardrails of accountability, in data usage compensation, intellectual property rights and legal liability.

We need skepticism in scrutinizing the dependence of AI engines' output on their input. Data tends to feed on itself, and machines often give humans what we want to see.

We need to preserve dissent even when it's inconvenient, and avoid philosopher kings dressed in AI even when the alternative appears inefficient.

We need entrepreneurs in free markets to invent competing AI systems and independently maximize choices outside the big tech oligopoly. Some of them will invent ways to break big data.

Where, when and how is data collected, stored and used? Like many technologies, AI is born neutral but suffers the natural tendency of being abused, especially in the name of the collective good. Today's most urgent and gravest nightmare of AI is its abuse by authoritarian regimes to irreversibly lock in the Orwellian 1984: the surveillance state oppressing rights, aided and abetted by AI three-quarters of a century after that bleak prophecy.

We need verifiable principles of individual rights, reflecting the Constitution of our country, in the age of data and machines around the globe. For example, MOTA:

My worst fear about AI is that it shrinks individual freedom. Our best hope for AI is that it advances individual freedom. That it presents more options, not more homogeneity. That the freedom to choose and free will still prevail.

Let us preserve the rights that survived other alarming headlines in centuries past.

Let our students sharpen the ability to doubt, debate and dissent.

Let a university, like Purdue, present the vista of intellectual conflicts and the toil of critical thinking.

Closing

Now, about asking AI engines to write this speech. We did ask one to write a commencement speech for the president of Purdue University on the topic of AI, after I finished drafting my own.

I'm probably not intelligent enough, or didn't trust the circular clichés on the web, but what I wrote had almost no overlap with what AI did. I might be biased, but the AI version reads like a B- high school essay, a grammatically correct synthesis with little specificity, originality or humor. It's so toxically generic that even adding a human in the loop to build on it proved futile. It's so boring that you would have fallen asleep even faster than you just did. By the way, you can wake up now: I'm wrapping up at last.

Maybe most commencement speeches and strategic plans sound about the same: Universities have made it too easy for language models! Maybe AI can remind us to try and be a little less boring in what we say and how we think. Maybe bots can murmur, "Don't you ChatGPT me," whenever we're just echoing in an ever smaller and louder echo chamber, down to the templated syntax and tired words. Smarter AI might lead to more interesting humans.

Well, there were a few words of overlap between my draft and AI's. So, here's from both some bytes living in a chip and a human Boilermaker to you all on this 2023 Purdue Spring Commencement: Congratulations, and Boiler Up!


12 shots at staying ahead of AI in the workplace – pharmaphorum

Oliver Stohlmann's Corporate Survival Hacks series draws on his experience of working in local, regional, and global life sciences communications to offer little tips for enjoying a big business career. In this update, he shares expectations of how artificial intelligence (AI) may impact our workplaces and what we can do to leverage this trend for the benefit of both people and business.

Regardless of where you are on the corporate ladder, whether you know it or not, your life is going to change: dramatically, and fast.

Indications of what artificial intelligence (AI) is already able to do, and how its broader application will change our work environment, are mind-boggling. What we'll experience in the next five to ten years is a massive explosion of AI usage in nearly all areas of life.

The beginning of the beginning?

A few examples? Generating flawless text or images is no longer an issue of skill or knowledge. Most AI-generated results are so impressive that a number of people and professions are already impacted by this.

For a teacher or university lecturer, it hardly makes sense today to have students draft their own essays or academic papers. According to Nature, it has become impossible even for scientists to differentiate with certainty between AI-created and original abstracts.

At a recent marketing seminar I was involved in, not one of 36 business students was able to provide a superior and better-structured answer than ChatGPT to the question, "Please explain SWOT analysis." Try for yourself.

Authentic voice and imagery

In the US, the start-up DoNotPay was about to run a pilot in February in which AI would represent a client in a speeding-case court hearing. The chatbot would run on a smartphone, listening to what was being said in court, before whispering instructions into the defendant's earpiece on how best to answer the judge's questions. The experiment was stopped at the last minute by state bar associations concerned about the "robot lawyer" practicing law without a license. However, if these objections can be resolved, this may be the way forward in many comparable settings. It's not a matter of AI capability.

If you cannot or do not wish to attend meetings in person, VALL-E is able to read any text in your voice and tonality, or anyone else's. All you need to do is submit a three-second original voice sample. Soon the human ear will not be able to differentiate between the authentic sound of a person's voice and AI imitations of it.

DALL-E 2 is an AI system that can create realistic images and artwork to your exact specifications, from a description in natural language. The need for graphic designers, photographers, illustrators, and even classic painters will fade.

Shifting from the "what" to the "how"

In the future, the best speakers will be those able to authentically repeat what those little ear pods tell them with exceptional charisma, intonation, natural gestures, and facial expressions. Neither content nor expertise will be a bottleneck. An AI-enabled speaker will be able to talk about absolutely any subject at any level of expertise. And yes, they'll be able to answer any question, too, even the provocative ones.

The best business consultants, trainers, and leadership coaches will be those with outstanding social, didactic, and motivational skills. Professional education will continue to matter, but it will focus much more on supporting executives in how to run their business, team, and customer relations; not on transferring knowledge. Being an expert knowledgeable on the "what" will not suffice. Most consultants, trainers, and coaches will be replaced by social learning environments. Facilitators may guide customised knowledge acquisition, while coaches and consultants will largely focus on optimising executives' acumen, personality, and other soft components of effective leadership.

More human in Human Resources

The best people managers will be those who naturally adopt and apply the latest intelligence on people management that their employer's AI-powered HR function equips them with. Human touch will not be lacking. It'll be delivered in a personalised way, allowing the manager to tailor their approach to different team members with diverse engagement drivers and needs. Data collection and evaluation will run fully automated in the background, providing the manager with individual strength assessments, goal recommendations, performance tracking, corrective interventions, and development recommendations customised to each team member, while calibrating across large organisations in real time.

The best HR representatives will be those who lend these automated processes and decisions a trustworthy, fair, and human face. Decisions will be facilitated and employee conversations prepared flawlessly by AI systems running in the background. The number of real people employed in human resources will shrink. Those left, however, will primarily focus on interfacing with internal clients and employees. The quality of these interactions, and that of preparing materials and compelling scripts to enable powerful conversations, will materially increase.

Language creation and translation

The best writers will be... Whoops, I started this sentence wrong: there'll be no need for writers. Or very few, outstanding ones at best. Already today, AI-generated texts are of a quality, clarity, and artistic beauty that beats 80% of professional human writers. Try it out: ask ChatGPT to draft an introduction for the website of the company "Human Hips", which designs and replaces human hip implants. See what happens.

I just made up that company name. If it existed, they could use the resulting draft on their website straight away. Yes, it could be improved by a great writer, with more details added to reflect the specialty offerings of that enterprise. However, AI is on track to produce texts superior to those of most human writers, based on minimal input and cost, and faster than anyone else could.
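If you want to run the experiment yourself programmatically rather than in the chat window, a minimal sketch using the OpenAI Python client follows; the model name and prompt wording here are my own illustrative choices, not anything prescribed above.

```python
# Minimal sketch: ask a chat model to draft the "Human Hips" website intro.
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat model works
    messages=[{
        "role": "user",
        "content": (
            "Draft an introduction for the website of the company 'Human Hips', "
            "which designs and replaces human hip implants."
        ),
    }],
)
print(response.choices[0].message.content)
```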

The best translators will be... Sorry, got this wrong again: translators will disappear. AI already supplies great, and will soon deliver perfect, translations into any and all global languages in split seconds, for any length and complexity of written or spoken word. Roles that translate texts or simultaneously interpret the spoken word will be a concept of the past.

Seizing the AI revolution

The best employees, those who retain well-paid jobs and climb the career ladder, will be those able to competently navigate the avalanche of AI-led and AI-augmented applications. They can select the relevant ones to add business value and adapt key features to meet specific business and customer needs. They're able to utilise AI to achieve outcomes faster and more efficiently, at lower cost and better quality than what's imaginable today.

The best executives will be algorithm-based. Of course, it's a scary prospect to remove thinking humans with deep background and long experience from positions of power. However, just imagine how much better, faster, fairer, and more ethical fact-based decision-making could become once typical human flaws are removed from the equation. These may include one's individual values and beliefs, ideologies, biases, personal relationships, and interdependencies, including corruption and other temptations; plus cultural and institutional norms, value systems, expectations, and the pressures typically resulting from those. Scary, but likely in the future.

The best politicians will be... You get my drift!

But there's an upside - many, actually

I would be remiss if I didn't at least briefly point out the phenomenally positive, life-enhancing, and sometimes life-saving opportunities AI brings to society, too.

Apart from GPS systems navigating us to destinations safely, faster, and more reliably, our cars are already equipped with lots of other AI-based safety features that serve to prevent accidents before they happen. An armada of sensors connected and communicating with smart control centres is constantly watching not only over the cars we use, but buses, trains, ships, planes, trucks, agricultural machinery, etc., to keep operations, passengers, and freight safe. They also make sure that buildings, roads, rail tracks, bridges, tunnels, airports, harbours, stations, wind turbines, and all other infrastructure is constantly monitored and gets maintained preventatively before fatigue, vibrations, climate, or other forces can lead to damage or disaster.

As much as I don't like the idea of machines taking over, they most certainly make safer drivers than I am. My future driverless car won't get distracted, nor will it become tired, and it will be able to detect approaching obstacles, stopping traffic, or the deer about to cross the road earlier than I could. In the same way, pilots have for years been using autopilots that can not only keep planes stable in the air, but also take off and land them safely in the harshest weather conditions.

Human health: an AI beneficiary

In medicine, AI-augmented surgery can already operate more precisely than the human hand could, with trained physicians informing and supervising the process and intervening as needed. Implants are being precision-measured and designed to your individual specifications: a unique product tailor-made to provide an optimal, long-lasting fit. That's not to mention the fast, minimally invasive precision surgery that spares patients pain and time, while reducing demands on hospital capacity and cost.

Innovative medical therapies will be designed, developed, and clinically trialled much faster, driven by AI-led processes, and made available to the right patients: those who benefit from treatment and who will have been pre-determined with the aid of biomarkers or other tests conducted by means of (you guessed it) AI, at rocket speed and precision.

These are just examples. The fast-increasing use of AI will radically change the way we work and live. But it will also usher in a world of opportunities that we and future generations will greatly benefit from.

Buckle up!

However, in case you find the above scenarios unsettling: most do not even touch on the true potential of artificial intelligence. What weve been talking about, so far, is mostly the seamless automation of individual steps and processes so that results can be achieved faster, more efficiently, and more accurately than any human brain could.

Fasten your seatbelts for when true self-learning algorithms, with the capacity and capability to continuously learn from errors and instantly apply their insights to improve approaches in real time, are ready for mass application.

For instance, DeepMind's AlphaGo system, who... apologies: that famously defeated the world's Go champion Lee Se-dol in 2016. Three years later, the South Korean attributed his retirement from the complex board game to the rise of AI, saying that it was "an entity that cannot be defeated."

Well, for a bit of hope, read this recent update on how the story continued, with a comprehensive defeat of a top-ranked AI system in the same game. However, you may also notice that even that human victory over AI was owed to yet more artificial intelligence support...

Whichever way you look at the rise of AI, its diverse applications, future possibilities, or the potential need for regulation: it's going to be a fast ride.

About the author

Oliver Stohlmann is a communications leader with more than 20 years' experience of working at local, regional, and global levels for several of the world's premier life sciences corporations. Most recently, he was Johnson & Johnson's global head of external innovation communication. He currently works for Exscientia plc and as an independent leadership coach, trainer, team-developer, and communications consultant.


Hypotheses and Visions for an Intelligent World – Huawei

As we move towards an intelligent world, information sensing, connectivity, and computing are becoming key. The better knowledge and control of matter, phenomena, life, and energy that result from these technologies are also becoming increasingly important. This makes rethinking approaches to networks and computing critical in the coming years.

In terms of networks, about 75 years ago Claude Shannon proposed his theorems based on three hypotheses: discrete memoryless sources, classical electromagnetic fields, and simple propagation environments. But since then, the industry has continued to push the boundaries of his work.

In 1987, Jim Durnin discovered self-healing non-diffracting beams that could continue to propagate when encountering an obstruction.

In 1992, L. Allen et al. postulated that the spin and orbital angular momentum of an electromagnetic field have infinitely many orthogonal quantum states along the same propagation direction, and that each quantum state can carry one Shannon capacity.
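In Shannon's framework, a single channel of bandwidth $B$ and signal-to-noise ratio $\mathrm{SNR}$ has capacity $C = B\log_2(1+\mathrm{SNR})$. On one reading of the postulate above, $K$ orthogonal angular-momentum states along the same propagation direction would aggregate, in principle, to

$$C_{\text{total}} = \sum_{k=1}^{K} B \log_2\!\left(1 + \mathrm{SNR}_k\right),$$

with each state contributing an independent capacity term. (This is the standard capacity formula, stated here for illustration; the article itself gives no equations.)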

After AlphaGo emerged in 2016, people realized how well foundation models can be used to describe a world with prior knowledge. This means that much information is not discrete or memoryless.

With the large-scale deployment of 5G Massive MIMO in 2018, it has become possible to have multiple independent propagation channels in complex urban environments with tall buildings, boosting communications capacity.

These new phenomena, knowledge, and environments are helping us break away from the hypotheses that shaped Shannon theorems. With them, I believe we can achieve more than 100-fold improvement in network capabilities in the next decade.

In computing, intelligent applications are developing rapidly, and AI models in particular are likely to help solve the fragmentation problems that are currently holding AI application development back. This is driving an exponential growth in model size. Academia and industry have already begun exploring the use of AI in domains like software programming, scientific research, theorem verification, and theorem proving. With more powerful computing models, more abundant computing power, and higher-quality data, AI will be able to better serve social progress.

AI capabilities are improving rapidly, and so we need to consider how to ensure AI development progresses in a way that benefits all people and ensures that AI execution is accurate and efficient. In addition to ethics and governance, AI also faces three big challenges from a theoretical and technical perspective: AI goal definition, accuracy and adaptability, and efficiency.

The first challenge AI faces is that there is no agreed upon definition of its goals. What kind of intelligence do we need?

If there is no clear definition, it is difficult to ensure that the goals of AI and humanity will be aligned, or to make reasonable measurements, classifications, and scientific computations. Professor Adrian Bejan, a physicist at Duke University, summarizes more than 20 goals for intelligence in his book The Physics of Life, including understanding and cognitive ability, learning and adaptability, and abstract thinking and problem-solving ability. There are many schools of AI, and they are poorly integrated. One important reason for this is that there are no commonly agreed upon goals for AI.

The second challenge AI faces is accuracy and adaptability. Learning based on statistical rules extracted from big data often results in non-transparent processes, unstable results, and bias. For example, when recognizing a banana using statistical, correlation-based algorithms, an AI system can be easily thrown off by background combinations and tiny amounts of noise. If other pictures are put next to the banana, it may be recognized as an oven or a slug. People recognize these pictures easily, but AI makes these mistakes, and it is difficult to explain or debug them.
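This fragility can be reproduced with a toy linear classifier: an FGSM-style perturbation capped at 0.05 per feature, aimed along the model's weights, flips the decision even though the input is almost unchanged. All numbers below are made up for illustration; this is a sketch of the phenomenon, not the banana experiment itself.

```python
import numpy as np

# Toy linear "banana detector": positive score means "banana".
rng = np.random.default_rng(0)
w = rng.normal(size=1000)                              # classifier weights
x = 0.04 * np.sign(w) + 0.01 * rng.normal(size=1000)  # correctly classified input

eps = 0.05                       # tiny per-feature perturbation budget
x_adv = x - eps * np.sign(w)     # nudge every feature against the class

print("original score: ", w @ x)      # positive => "banana"
print("perturbed score:", w @ x_adv)  # negative => misclassified
print("max change per feature:", np.abs(x_adv - x).max())  # only 0.05
```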

The third challenge for AI is efficiency. According to the 60th TOP500 list published in 2022, the fastest supercomputer is Frontier, which can achieve 1,102 PFLOPS while drawing 21 megawatts of power. Human brains, in contrast, can deliver an estimated 30 PFLOPS on just 20 watts. These numbers show that the human brain is about 30,000 to 100,000 times more energy efficient than a supercomputer.
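The ratio is easy to verify from the figures quoted above (a back-of-the-envelope check; the brain's "PFLOPS" figure is, of course, an estimate):

```python
# Energy efficiency in FLOPS per watt, using the figures quoted above.
frontier_flops = 1_102e15   # 1,102 PFLOPS
frontier_watts = 21e6       # ~21 MW
brain_flops = 30e15         # ~30 PFLOPS (estimate)
brain_watts = 20

frontier_eff = frontier_flops / frontier_watts  # ~5.2e10 FLOPS/W
brain_eff = brain_flops / brain_watts           # ~1.5e15 FLOPS/W
print(f"brain is ~{brain_eff / frontier_eff:,.0f}x more energy efficient")
# ~28,600x with these numbers; higher brain estimates push it toward 100,000x
```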

In addition to energy efficiency, data efficiency is also a major challenge for AI. It is true that we can better understand the world by extracting statistical laws from big data. But can we find logic and generate concepts from small data, and abstract them into principles and rules?

We have come up with several hypotheses to address these three challenges:

Starting from these hypotheses, we can begin to take more practical steps to develop knowledge and intelligence.

At Huawei, our first vision is to combine systems engineering with AI to develop accurate, autonomous, and intelligent systems. In recent years, there has been a lot of research in academia about new AI architectures that go beyond transformers.

We can build upon these thoughts by focusing on three parts: perception and modeling, automatic knowledge generation, and solutions and actions. From there, we can develop more accurate, autonomous, and intelligent systems through multimodal perception fusion and modeling, as well as knowledge and data-driven decision-making.

Perception and modeling are about representations and abstractions of the external environment and ourselves. Automatic knowledge generation means systems will need to integrate the existing experience of humans into strategy models and evaluation functions to increase accuracy. Solutions can be directly deduced based on existing knowledge as well as internal and external information, or through trial-and-error and induction. We hope that these technologies will be incorporated into future autonomous systems, so that they can better support domains like autonomous driving networks, autonomous vehicles, and cloud services.

Our second vision is to create better computing models, architectures, and components to continuously improve the efficiency of intelligent computing. I once spoke with Fields Medalist Professor Laurent Lafforgue about whether invariant object recognition could be made more accurate and efficient by using geometric manifolds for object representation and computing in addition to pixels, which are now commonly used in visual and spatial computing.

In their book Neuronal Dynamics, co-authors Gerstner, Kistler, Naud, and Paninski at École Polytechnique Fédérale de Lausanne (EPFL) explain the concept of functional columns in the cerebral cortex and the six-layer connections between these functional columns. It makes me wonder: Can such a shallow neural network be more efficient than a deep neural network?

A common bottleneck for today's AI computing is the memory wall: reading, writing, and migrating data often takes 100 times longer than the computation itself. So, can we bypass the conventional processors, instruction sets, buses, logic components, and memory components of the von Neumann architecture, and redefine architectures and components based on advanced AI computing models instead?

Huawei has been exploring this idea by looking into the practical uses of AI. First, we have worked on "AI for Industry", which uses industry-specific large models to create more value. Industries face many challenges when it comes to AI application development: they need to invest a huge amount of manpower to label samples, find it difficult to maintain models, and lack the necessary capabilities in model generalization. Put simply, most do not have the resources to do this.

To address these challenges, Huawei has developed L1 industry-specific large models based on its L0 large foundation models dedicated to computer vision, natural language processing, graph neural networks, and multi-modal interactions. These large models lower the barrier to AI development, improve model generalization, and address application fragmentation. The models are already being used to improve operational efficiency and safety in major industries like electric power, coal mining, transportation, and manufacturing.

Huawei's Aviation & Rail Business Unit, for example, is working with customers and partners in Hohhot, Wuhan, Xi'an, Shenzhen, and Hong Kong to explore the digital transformation of urban rail, railways, and airports. This has improved operational safety and efficiency, as well as user experience and satisfaction. Shenzhen Airport has realized smart stand allocation with the support of cloud, big data, and AI, reducing airside transfer-bus passenger flow by 2.6 million passengers every year. The airport has become a global benchmark in digital transformation.

"AI for Science" is another initiative that will be able to greatly empower scientific computing. One example of this in action is the Pangu meteorology model we developed using a new 3D transformer-based coding architecture for geographic information and a hierarchical time-domain aggregation method. With a prior knowledge of global meteorological phenomena, the Pangu model uses more accurate and efficient learning and reasoning to replace time series solutions of hyperscale partial differential equations using traditional scientific computing methods. The Pangu model can produce 1-hour to 7-day weather forecasts in just a few seconds, and its results are 20% more accurate than forecasts from the European Centre for Medium-Range Weather Forecasts.

AI can also support software programming. In addition to using AI for traditional retrieval and recommendation over large amounts of existing code, Huawei is developing new model-driven and formal methods. This is especially important for large-scale parallel processing, where many tasks are intertwined and correlated. Huawei has developed a new approach called Vsync, which realizes automatic verification and concurrent code optimization of operating system kernels and improves performance without undermining reliability. The Linux community once discovered a difficult memory barrier bug that took community experts more than two years to fix. With Huawei's Vsync method, however, it would have taken just 20 minutes to discover and fix the bug.

We have also been studying new computing models for automated theorem proving. Topos theory, for example, can be used to research category proving, congruence reasoning systems, and automated theorem derivation to improve the automation level of theorem provers. In doing this, we want to solve state explosion and automatic model abstraction problems and improve formal verification capabilities.

Finally, we are also exploring advanced computing components. We can use the remainder theorem to address conversion efficiency and overflow problems in real-world applications. We hope to implement basic addition and multiplication functions in chips and software to improve the efficiency of intelligent computing.
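One concrete reading of the "remainder theorem" idea is residue-number-system arithmetic: a number is stored as its remainders modulo pairwise-coprime bases, addition and multiplication proceed independently per base with no carries and no overflow within range, and the Chinese Remainder Theorem reconstructs the result. A minimal sketch, with moduli chosen purely for illustration:

```python
# Residue-number-system arithmetic: per-base operations, CRT reconstruction.
from math import prod

MODULI = (7, 11, 13)  # pairwise coprime; representable range: 7*11*13 = 1001

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # Multiply each residue independently: no carries between bases.
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    # Chinese Remainder Theorem reconstruction (Python 3.8+ for pow(., -1, m)).
    M = prod(MODULI)
    total = 0
    for residue, m in zip(r, MODULI):
        Mi = M // m
        total += residue * Mi * pow(Mi, -1, m)
    return total % M

print(from_rns(rns_mul(to_rns(23), to_rns(19))))  # 437 == 23 * 19
```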

As we move towards the intelligent world, networks and computing are two key cornerstones that underpin our shift from narrow AI towards general-purpose AI and super AI. To get there, we will need to take three key steps. First, we will need to develop AI theories and technologies, as well as related ethics and governance, so that we can deliver ubiquitous intelligent connectivity and drive social progress. Second, we will need to continue pushing our cognitive limits to improve our ability to understand and control intelligence. Finally, we need to define the right goals and use the right approaches to guide AI development in a way that truly helps overcome human limitations, improve lives, create matter, control energy, and transcend time and space. This is how we will succeed in our adventure into the future.


Cloud storage is the key to unlocking AI’s full potential for businesses – TechRadar

Artificial intelligence continues to make headlines for its potential to transform businesses across various industries, and has been widely embraced as a technology that can help companies unlock new opportunities, improve efficiency, and increase profitability. At its most basic level, AI does this by analyzing inputted information to create intelligent outputs. The AI industry is currently valued at over $136 billion and is predicted to grow over 13 times in the next 7 years.

At its core, AI relies on data - specifically, large volumes of high-quality data to train machine learning algorithms. These algorithms analyze inputted information to identify patterns that can be used to make predictions, automate processes, or perform other tasks. Accordingly, while the power of AI applications across industries is immense, the benefits are entirely based on the information available to these systems.

Given that AI is so reliant on data, where this data is stored becomes an important concern. Businesses need to know that they can securely store a large volume of data and that this data is easily accessible for the AI systems to use. Moreover, for businesses, proprietary data for custom AI applications must be kept safe. With this in mind, the best way for businesses to keep large quantities of data safe and easily accessible is to keep at least one copy of it in the cloud.

AI systems need high volumes of data on hand to operate optimally. These systems have the capacity to improve their performance and enhance their learning speed as the amount of available data increases. For example, Google DeepMind's AlphaGo Zero had to play 20 million games against itself to train its AI to a superhuman level of play, demonstrating just how much data is needed for AI to work at its full potential.

Given that the success of AI implementation hinges on the amount of data AI systems can access, companies must thoughtfully consider their data storage options, whether that be on-premise, in the cloud, or in a hybrid cloud system - and how that impacts their AI implementation.

Storing data on local hardware owned and managed by an enterprise, known as on-premises data storage, requires securing storage resources and maintaining systems. However, scaling in this way is difficult and costly compared to cloud-based storage, which is better equipped to handle increasing data volumes. On-premise scalability is also limited by ageing hardware and software, which often come with discontinued support plans and retired products. Therefore, for better scalability and security, the adoption of cloud storage services is becoming increasingly crucial for companies as they develop "AI first" strategies.


David Friend is the co-founder and CEO of Wasabi.

Similar to the way businesses need to store a lot of data for AI, they also need to keep proprietary data should they wish to customize their AI to meet their organization's specific needs. For instance, an HR manager may be able to use AI to analyze years' worth of company-wide survey data in minutes and predict employee responses to different kinds of company news, like new policies or team switch-ups. Similarly, an AI system could analyze company growth and economic data to inform major business decisions.

Incorporating proprietary data into an AI system improves the accuracy and relevance of insights, leading to better decision-making and business outcomes. Customising AI applications using proprietary data can give businesses a competitive edge; however, should they choose to take advantage of customised AI through proprietary data, it's important that this data is stored safely.

Unfortunately, the rise of AI systems brings with it a host of new cybersecurity risks, and the cost of cyberattacks is expected to surge in the next five years, rising from $8.44 trillion in 2022 to $23.84 trillion by 2027. Particularly when storing critical company data, it's key that AI systems are well protected against ransomware attacks.

An important security advantage cloud has over on-premise solutions is that cloud infrastructure is separated from user workstations, bearing in mind that hackers most commonly access company networks through phishing emails. Accordingly, having multiple copies of data, with at least one version stored in the cloud, is key to keeping company data safe and not compromising any critical AI systems.

The best way to protect against threats that may compromise the primary data copy is to keep a second, immutable copy of the AI system data. Immutable storage is a cloud storage feature that provides extra security by preventing data modification or deletion. Combined with comprehensive backup strategies, cloud storage providers offer high data security by storing immutable backups that can be retrieved if original data is compromised or deleted, ensuring availability, and avoiding loss of critical data.
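As an illustration of what immutability looks like in practice, S3-compatible storage services (Wasabi among them) expose an Object Lock API. A minimal sketch with boto3 follows, where the endpoint, bucket, key, and 90-day retention window are all hypothetical, and the bucket must have been created with Object Lock enabled:

```python
# Sketch: write a backup copy that cannot be modified or deleted until the
# retention date passes. Bucket, key, and endpoint are hypothetical.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.wasabisys.com")  # assumed endpoint

s3.put_object(
    Bucket="ai-training-backups",           # hypothetical bucket
    Key="snapshots/2023-05-01.tar.gz",      # hypothetical object key
    Body=open("snapshot.tar.gz", "rb"),
    ObjectLockMode="COMPLIANCE",            # immutable even to administrators
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
)
```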

For businesses, the value of AI is in its convenience and potential cost savings as it takes on tasks that would have previously taken hours of employee time and energy. By embracing cloud storage solutions for the reasons set out above, businesses can unleash the full power of AI for success.



The Quantum Frontier: Disrupting AI and Igniting a Patent Race – Lexology

The contemporary computer processor, at only half the size of a penny, possesses the extraordinary capacity to carry out 11 trillion operations per second with the assistance of an impressive assembly of 16 billion transistors.[1] This feat starkly contrasts with the early days of transistor-based machines, such as the Manchester Transistor Computer, which managed an estimated 100,000 operations per second using 92 transistors and had the dimensions of a large refrigerator. For comparison, while the Manchester Transistor Computer could take several seconds or minutes to calculate the sum of two large numbers, the Apple M1 chip can calculate it almost instantly. Such a rapid acceleration of processing capabilities and device miniaturization is attributable to the empirical observation known as Moore's Law, named after the late Gordon Moore, the co-founder of Intel. Moore's Law posits that the number of transistors integrated into a circuit is poised to double approximately every two years.[2]
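As a back-of-the-envelope check of that doubling rate against the two machines above (taking the Manchester Transistor Computer as circa 1953 and the M1 as 2020, dates this article does not state):

```python
from math import log2

# How many doublings separate 92 transistors from 16 billion,
# and how many years per doubling does that imply?
doublings = log2(16_000_000_000 / 92)
print(f"{doublings:.1f} doublings")                           # ~27.4
print(f"{(2020 - 1953) / doublings:.1f} years per doubling")  # ~2.4
```

Roughly 2.4 years per doubling, consistent with Moore's two-year rule of thumb.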

In their development, these powerful processors have paved the way for advancements in diverse domains, including the disruptive field of artificial intelligence (AI). Nevertheless, as we confront the boundaries of Moore's Law due to the physical limits of transistor miniaturization,[3] the horizons of the field of computing are extending into the enigmatic sphere of quantum physics: the branch of physics that studies the behavior of matter and energy at the atomic and subatomic scales. It is within this realm that the prospect of quantum computing arises, offering immense potential for exponential growth in computational performance and speed, thereby heralding a transformative era in AI.

In this article, we scrutinize the captivating universe of quantum computing and its prospective implications on the development of AI and examine the legal measures adopted by leading tech companies to protect their innovations within this rapidly advancing field, particularly through patent law.

Qubits: The Building Blocks of Quantum Computing

In classical computing, the storage and computation of information are entrusted to binary bits, which assume either a 0 or 1 value. For example, a classical computer can have a specialized storage device called a register that can store a specific number at a time using bits. Each bit is like a slot that can be either empty (0) or occupied (1), and together they can represent numbers, such as the number 2 (with a binary representation of 010). In contrast, quantum computing harnesses the potential of quantum bits (infinitesimal particles, such as electrons or photons, defined by their respective quantum properties, including spin or polarization), commonly referred to as qubits.

Distinct from their classical counterparts, qubits can coexist in a superposition of states, signifying their capacity to represent both 0 and 1 simultaneously. This advantage means that, unlike bits with slots that are either empty or occupied, each qubit can be both empty and occupied at the same time, allowing each register to represent multiple numbers concurrently. While a bit register can only represent the number 2 (010), a qubit register can represent both the numbers 2 and 4 (010 and 100) simultaneously.

This superposition of states enables the parallel processing of information, since multiple numbers in a qubit register can be processed at one time. For example, a classical computer may use two different bit registers to first add the number 2 to the number 4 (010 + 100) and then add the number 4 to the number 1 (100 + 001), performing the calculations one after the other. In contrast, qubit registers, since they can hold multiple numbers at once, can perform both operations, adding the number 2 to the number 4 (010 + 100) and adding the number 4 to the number 1 (100 + 001), simultaneously.
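To make the register picture concrete, here is a toy state-vector simulation in plain Python/numpy (a sketch, not real quantum hardware): a three-qubit register holds 2 and 4 in equal superposition, and a single application of a reversible "add 1 mod 8" operation updates both numbers at once.

```python
import numpy as np

# A 3-qubit register is simulated by 2**3 = 8 complex amplitudes.
state = np.zeros(8, dtype=complex)
state[0b010] = 1 / np.sqrt(2)  # the number 2
state[0b100] = 1 / np.sqrt(2)  # the number 4

# Reversible "add 1 mod 8" as an 8x8 permutation matrix.
U = np.zeros((8, 8))
for x in range(8):
    U[(x + 1) % 8, x] = 1

# One application acts on every number in the superposition simultaneously.
state = U @ state
for idx, amp in enumerate(state):
    if abs(amp) > 1e-12:
        print(f"|{idx:03b}>  amplitude {amp.real:+.3f}")  # now |011> and |101>
```

On real hardware the register is never enumerated like this; measurement collapses it to a single outcome, which is why useful quantum algorithms arrange interference so that the desired answer dominates.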

Moreover, qubits employ the singular characteristics of entanglement and interference to execute intricate computations with a level of efficiency unattainable by classical computers. For instance, entanglement correlates qubits so that operations are coordinated across the whole register, which increases computational efficiency. At the same time, interference involves performing calculations on multiple possibilities at once and adjusting probability amplitudes to guide the quantum system toward the optimal solution. Collectively, these attributes equip quantum computers with the ability to confront challenges that would otherwise remain insurmountable for conventional computing systems, thereby radically disrupting the field of computing and every field that depends on it.

Quantum Computing

Quantum computing embodies a transformative leap for AI, providing the capacity to process large data sets and complex algorithms at unprecedented speeds. This transformative technology has far-reaching implications in fields like cryptography,[4] drug discovery,[5] financial modeling,[6] and numerous other disciplines, as it offers unparalleled computational power and efficacy. For example, a classical computer using a General Number Field Sieve (GNFS) algorithm might take several months or even years to factorize a 2048-bit number. In contrast, a quantum computer using Shor's algorithm (a quantum algorithm) could potentially accomplish this task in a matter of hours or days. This capability could be used to break the widely used RSA public-key encryption system, which would take conventional computers tens or hundreds of millions of years to break, jeopardizing the security of encrypted data, communications, and transactions across industries such as finance, healthcare, and government. Leveraging the unique properties of qubits (including superposition, entanglement, and interference), quantum computers are equipped to process vast amounts of information in parallel. This capability enables them to address intricate problems and undertake calculations at velocities that, in certain but not all cases,[7] surpass those of classical computers by orders of magnitude.
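For intuition, the quantum speedup in Shor's algorithm lives in a single subroutine: finding the period of a^x mod N. The rest is classical number theory, as this toy sketch shows by brute-forcing the period for a tiny modulus (numbers chosen purely for illustration; the quantum computer replaces only the brute-force loop):

```python
from math import gcd

def factor_via_period(N, a):
    """Classical skeleton of Shor's algorithm: only the period-finding loop
    below is what a quantum computer accelerates exponentially."""
    g = gcd(a, N)
    if g != 1:
        return g, N // g          # lucky guess: a shares a factor with N
    # Brute-force the period r of f(x) = a**x mod N (the quantum step).
    r, y = 1, a % N
    while y != 1:
        y = (y * a) % N
        r += 1
    if r % 2:
        return None               # odd period: retry with a different a
    x = pow(a, r // 2, N)
    p, q = gcd(x - 1, N), gcd(x + 1, N)
    return (p, q) if p * q == N and 1 < p < N else None

print(factor_via_period(15, 7))   # -> (3, 5)
```

For a 2048-bit N, the brute-force loop is hopeless classically, which is exactly the gap between GNFS and Shor's algorithm described above.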

The augmented computational capacity of quantum computing is promising to significantly disrupt various AI domains, encompassing quantum machine learning, natural language processing (NLP), and optimization quandaries. For instance, quantum algorithms can expedite the training of machine learning models by processing extensive datasets with greater efficiency, enhancing performance, and accelerating model development. Furthermore, quantum-boosted natural language processing algorithms may yield more precise language translation, sentiment analysis, and information extraction, fundamentally altering how we engage with technology.

Patent Applications Related to Quantum Computers

While quantum computers remain in their nascent phase, to date, the United States Patent and Trademark Office has received more than 6,000 applications directed to quantum computers, with over 1,800 applications being granted a United States patent. Among these applications and patents, IBM emerges as the preeminent leader, trailed closely by various companies, including Microsoft, Google, and Intel, which are recognized as significant contributors to the field of AI. For instance, Microsoft is a major investor in OpenAI (the developer of ChatGPT), has developed Azure AI (a suite of AI services and tools for implementing AI into applications or services), and is integrating ChatGPT into various Microsoft products like Bing and Microsoft 365 Copilot. Similarly, Google has created AI breakthroughs such as AlphaGo (the AI that defeated the world champion of the board game Go), hardware like tensor processing units (TPUs) for accelerating machine learning and deep learning tasks, and its own chatbot called Bard (built on its LaMDA model).

Patents Covering Quantum Computing

The domain of quantum computing is progressing at a remarkable pace, as current research seeks to refine hardware, create error correction methodologies, and investigate novel algorithms and applications. IBM and Microsoft stand at the forefront of this R&D landscape in quantum computing. Both enterprises have strategically harnessed their research findings to secure early patents encompassing quantum computers. Notwithstanding, this initial phase may merely represent the inception of a competitive endeavor to obtain patents in this rapidly evolving field. A few noteworthy and recent United States patents that have been granted thus far include:

Conclusion

Quantum computing signifies a monumental leap forward for AI, offering unparalleled computational strength and efficiency. As we approach the limits of Moore's Law, the future of AI is contingent upon harnessing qubits' distinctive properties, such as superposition, entanglement, and interference. The cultivation of quantum machine learning, along with its applications in an array of AI domains, including advanced machine learning, NLP, and optimization, portends a revolution in how we address complex challenges and engage with technology.

Prominent tech companies like IBM and Microsoft have demonstrated their commitment to this burgeoning field through investments and the construction of patent portfolios that encompass this technology. The evident significance of quantum computing in shaping the future of AI suggests that we may be witnessing the onset of a competitive patent race within the sphere of quantum computing.
