Archive for the ‘Artificial Intelligence’ Category

Governor, lawmakers are already planning big revisions to Colorado's first-in-the-nation artificial intelligence law – The Colorado Sun

Four weeks after a contentious Colorado bill regulating artificial intelligence systems to prevent harm to consumers was signed into law, the governor, attorney general and lawmakers are already vowing to revise the statute at the request of business leaders.

Discussions about changing the law began earlier this month after state officials heard an outcry from about 200 prominent technology company executives and venture capitalists about Senate Bill 205.

The plan to take another look at the law isn't entirely surprising. Polis had reservations about the bill, but signed it anyway because he said there was time to change it before it went into effect in 2026.

"I'm certainly encouraged by the fact that the beginning date for provisions are in 2026," Gov. Jared Polis told reporters after the legislative session ended May 8. "I am confident that will leave ample time for any improvements that need to be made prior to it becoming effective."

Changes to the law can't be made by the legislature until the General Assembly reconvenes in January for the 2025 lawmaking term unless the governor or legislature calls a special session, which is highly unlikely.

In a letter Thursday to "innovators, consumers, and all those interested in the AI space," Polis, Attorney General Phil Weiser and Senate Majority Leader Robert Rodriguez, D-Denver, acknowledged that the recently passed legislation needed additional clarity and improvements. Rodriguez was one of the main sponsors of the bill.

"Starting today, in the lead up to the 2025 legislative session and well before the February 2026 deadline for implementation of the law, at the governor and legislative leadership's direction, state and legislative leaders will engage in a process to revise the new law, and minimize unintended consequences associated with its implementation," the letter says.

Denver Mayor Mike Johnston added his signature after this story first published.

The letter goes on to spell out which parts of the law must be addressed, including defining what high-risk systems are and focusing regulation on developers of those high-risk systems rather than the smaller companies that use third-party AI software. (If a company were using something like ChatGPT and its developer, OpenAI, made changes, there is confusion about whether Colorado law would require the local company to reassess its compliance.)

Other improvements the governor, attorney general and lawmakers are promising to make include requiring enforcement by the attorney general to happen after the fact rather than through proactive disclosure, and clarifying that consumers have a right to appeal only to the attorney general, though they could also bring any discrimination matters to the Colorado Civil Rights Commission.

Tech leaders complained about the prohibitive language of the new law and how it was already putting a black eye on Colorado for companies looking to expand in the state.

The letter from Polis, the attorney general and Rodriguez addressed that reality, saying that since the law was signed, "many of our home-grown businesses have highlighted the risk that an overly broad definition of AI, coupled with proactive disclosure requirements, could inadvertently impose prohibitively high costs on them, resulting in barriers to growth and product development, job losses and a diminished capacity to raise capital."

Dan Caruso, head of Caruso Ventures and founding CEO of telecom Zayo Group, was one of 200 names on the letter to Polis from tech industry leaders.

Caruso said he learned about the AI bill after it became law and only because he immediately heard from investors and tech companies confused about the ramifications to their businesses.

The way the law is written, he said, a grocery store that uses AI at the cash register to scan and add up merchandise could be subject to new reporting requirements even if it had nothing to do with building the AI inside.

But his other problem is that tech startups dabbling in AI may feel there's an added administrative burden to developing technology in Colorado.

"We certainly agree with the intent of trying to protect the consumer, but in the process you cut off a bunch of investment into Colorado and you're going to be hurting all the consumers in Colorado because we need tech jobs. We need our innovation economy. That's what makes us thrive," Caruso said. "By rushing ahead on the AI bill without fully understanding the implications, we kind of put a lot of the innovation economy into jeopardy. So we needed to work with them to correct the broadness of certain provisions of the bill."

Caruso said he and other tech leaders hope to participate in the process to revise the bill to prepare an amended version for the next legislative session.

"That letter is the first step of the process. Not the last step. We still have to get to the step where changes are made early next year," Caruso said. "But we need to reassure investors that Colorado still is a great place to invest for innovation."

Other notable names on the industry letter included Bryan Leach, CEO of the consumer app developer Ibotta; former DaVita CEO Kent Thiry; Brad Feld, a venture capitalist at The Foundry Group; and David Cohen, who cofounded and is CEO of Techstars, which he started with Feld and Polis.

Rodriguez, who didn't respond Friday to a request for comment, said during a legislative hearing on the measure that "all that we're asking for companies to do (is put) in place a notice to consumers, (perform) risk assessments on their tools and have an accountability report when something goes wrong that results in discrimination. That's what this bill does."

But AI developers opposed the bill from the start because there were concerns that even small changes at the development stage would discourage innovation by startups and AI-adjacent companies. Consumer advocates, however, felt the bill did not go far enough because AI-based discrimination was already occurring, with cases involving background checks and resume screening, and adjustments to auto insurance premiums.

The new letter from Polis and other elected officials was disappointing, Matt Scherer, senior policy counsel for the Center for Democracy and Technology, a nonprofit that advances civil rights and liberties, said in an email Friday. He said the changes were proposed before a task force that includes labor and consumer group representation had even met.

"Labor and consumer groups will strongly oppose those changes," Scherer said. "The changes they are proposing would completely neuter the law, which is, of course, the objective of tech industry and other business pressure groups who have been spreading misinformation and fear-mongering about this bill ever since the sponsors made a few modest changes to strengthen what was a largely industry-crafted bill."

Eric Maruyama, a spokesman for the governor's office, said in an email that Gov. Polis is proud that Colorado is leading the way in the innovative sectors of tomorrow.

"The governor is grateful for and shares Sen. Rodriguez's commitment to ensuring that Coloradans are protected from bias and discrimination in AI and is focused on ensuring that state standards support consumers and Colorado's innovation economy," Maruyama said. "Gov. Polis looks forward to working with leaders and stakeholders to help grow Colorado's AI sector."

Colorado Sun staff writer Jesse Paul contributed to this report.

This story has been updated to add additional comments.



The effects of artificial intelligence on the future of humanity (by Pope Francis) – ZENIT

(ZENIT News / Vatican City-Apulia, Italy, 06.14.2024).- For the first time in history, a Pope participated in a G7 summit, a meeting attended by the leaders of the seven most industrialized economies in the world, along with guests invited by the leader currently holding the rotating presidency. At the invitation of Prime Minister Giorgia Meloni, Pope Francis attended the meeting. Below, we offer the full text of the Pope's speech. Pope Francis read a shorter version of this same speech earlier in the afternoon on Friday, June 14.

***

Esteemed ladies and gentlemen,

I address you today, the leaders of the Intergovernmental Forum of the G7, concerning the effects of artificial intelligence on the future of humanity.

Sacred Scripture attests that God bestowed his Spirit upon human beings so that they might have "skill and understanding and knowledge in every craft" (Ex 35:31).[1] Science and technology are therefore brilliant products of the creative potential of human beings.[2]

Indeed, artificial intelligence arises precisely from the use of this God-given creative potential.

As we know, artificial intelligence is an extremely powerful tool, employed in many kinds of human activity: from medicine to the world of work; from culture to the field of communications; from education to politics. It is now safe to assume that its use will increasingly influence the way we live, our social relationships and even the way we conceive of our identity as human beings.[3]

The question of artificial intelligence, however, is often perceived as ambiguous: on the one hand, it generates excitement for the possibilities it offers, while on the other it gives rise to fear for the consequences it foreshadows. In this regard, we could say that all of us, albeit to varying degrees, experience two emotions: we are enthusiastic when we imagine the advances that can result from artificial intelligence but, at the same time, we are fearful when we acknowledge the dangers inherent in its use.[4]

After all, we cannot doubt that the advent of artificial intelligence represents a true cognitive-industrial revolution, which will contribute to the creation of a new social system characterised by complex epochal transformations. For example, artificial intelligence could enable a democratization of access to knowledge, the exponential advancement of scientific research and the possibility of giving demanding and arduous work to machines. Yet at the same time, it could bring with it a greater injustice between advanced and developing nations or between dominant and oppressed social classes, raising the dangerous possibility that a "throwaway culture" be preferred to a "culture of encounter".

The significance of these complex transformations is clearly linked to the rapid technological development of artificial intelligence itself.

It is precisely this powerful technological progress that makes artificial intelligence at the same time an exciting and fearsome tool, and demands a reflection that is up to the challenge it presents.

In this regard, perhaps we could start from the observation that artificial intelligence is above all else a tool. And it goes without saying that the benefits or harm it will bring will depend on its use.

This is surely the case, for it has been this way with every tool fashioned by human beings since the dawn of time.

Our ability to fashion tools, in a quantity and complexity that is unparalleled among living things, speaks of a techno-human condition: human beings have always maintained a relationship with the environment mediated by the tools they gradually produced. It is not possible to separate the history of men and women and of civilization from the history of these tools. Some have wanted to read into this a kind of shortcoming, a deficit, within human beings, as if, because of this deficiency, they were forced to create technology.[5] A careful and objective view actually shows us the opposite. We experience a state of outwardness with respect to our biological being: we are beings inclined toward what lies outside-of-us, indeed we are radically open to the beyond. Our openness to others and to God originates from this reality, as does the creative potential of our intelligence with regard to culture and beauty. Ultimately, our technical capacity also stems from this fact. Technology, then, is a sign of our orientation towards the future.

The use of our tools, however, is not always directed solely to the good. Even if human beings feel within themselves a call to the beyond, and to knowledge as an instrument of good for the service of our brothers and sisters and our common home (cf. Gaudium et Spes, 16), this does not always happen. Due to its radical freedom, humanity has not infrequently corrupted the purposes of its being, turning into an enemy of itself and of the planet.[6] The same fate may befall technological tools. Only if their true purpose of serving humanity is ensured, will such tools reveal not only the unique grandeur and dignity of men and women, but also the command they have received to "till and keep" (cf. Gen 2:15) the planet and all its inhabitants. To speak of technology is to speak of what it means to be human and thus of our singular status as beings who possess both freedom and responsibility. This means speaking about ethics.

In fact, when our ancestors sharpened flint stones to make knives, they used them both to cut hides for clothing and to kill each other. The same could be said of other more advanced technologies, such as the energy produced by the fusion of atoms, as occurs within the Sun, which could be used to produce clean, renewable energy or to reduce our planet to a pile of ashes.

Artificial intelligence, however, is a still more complex tool. I would almost say that we are dealing with a tool sui generis. While the use of a simple tool (like a knife) is under the control of the person who uses it and its use for the good depends only on that person, artificial intelligence, on the other hand, can autonomously adapt to the task assigned to it and, if designed this way, can make choices independent of the person in order to achieve the intended goal.[7]

It should always be remembered that a machine can, in some ways and by these new methods, produce algorithmic choices. The machine makes a technical choice among several possibilities based either on well-defined criteria or on statistical inferences.

Human beings, however, not only choose, but in their hearts are capable of deciding. A decision is what we might call a more strategic element of a choice and demands a practical evaluation. At times, frequently amid the difficult task of governing, we are called upon to make decisions that have consequences for many people. In this regard, human reflection has always spoken of wisdom, the phronesis of Greek philosophy and, at least in part, the wisdom of Sacred Scripture. Faced with the marvels of machines, which seem to know how to choose independently, we should be very clear that decision-making, even when we are confronted with its sometimes dramatic and urgent aspects, must always be left to the human person. We would condemn humanity to a future without hope if we took away people's ability to make decisions about themselves and their lives, by dooming them to depend on the choices of machines. We need to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs: human dignity itself depends on it.

Precisely in this regard, allow me to insist: in light of the tragedy that is armed conflict, it is urgent to reconsider the development and use of devices like the so-called "lethal autonomous weapons" and ultimately ban their use. This starts from an effective and concrete commitment to introduce ever greater and proper human control. No machine should ever choose to take the life of a human being.

It must be added, moreover, that the good use, at least of advanced forms of artificial intelligence, will not be fully under the control of either the users or the programmers who defined their original purposes at the time they were designed. This is all the more true because it is highly likely that, in the not-too-distant future, artificial intelligence programs will be able to communicate directly with each other to improve their performance. And if, in the past, men and women who fashioned simple tools saw their lives shaped by them (the knife enabled them to survive the cold but also to develop the art of warfare), now that human beings have fashioned complex tools they will see their lives shaped by them all the more.[8]

The basic mechanism of artificial intelligence

I would like now briefly to address the complexity of artificial intelligence. Essentially, artificial intelligence is a tool designed for problem solving. It works by means of a logical chaining of algebraic operations, carried out on categories of data. These are then compared in order to discover correlations, thereby improving their statistical value. This takes place thanks to a process of self-learning, based on the search for further data and the self-modification of its calculation processes.

Artificial intelligence is designed in this way in order to solve specific problems. Yet, for those who use it, there is often an irresistible temptation to draw general, or even anthropological, deductions from the specific solutions it offers.

An important example of this is the use of programs designed to help judges in deciding whether to grant home-confinement to inmates serving a prison sentence. In this case, artificial intelligence is asked to predict the likelihood of a prisoner committing the same crime(s) again. It does so based on predetermined categories (type of offence, behaviour in prison, psychological assessment, and others), thus allowing artificial intelligence to have access to categories of data relating to the prisoner's private life (ethnic origin, educational attainment, credit rating, and others). The use of such a methodology, which sometimes risks de facto delegating to a machine the last word concerning a person's future, may implicitly incorporate prejudices inherent in the categories of data used by artificial intelligence.

Being classified as part of a certain ethnic group, or simply having committed a minor offence years earlier (for example, not having paid a parking fine) will actually influence the decision as to whether or not to grant home-confinement. In reality, however, human beings are always developing, and are capable of surprising us by their actions. This is something that a machine cannot take into account.

It should also be noted that applications similar to the one I have just mentioned will be used ever more frequently due to the fact that artificial intelligence programs will be increasingly equipped with the capacity to interact directly (chatbots) with human beings, holding conversations and establishing close relationships with them. These interactions may end up being, more often than not, pleasant and reassuring, since these artificial intelligence programs will be designed to learn to respond, in a personalised way, to the physical and psychological needs of human beings.

It is a frequent and serious mistake to forget that artificial intelligence is not another human being, and that it cannot propose general principles. This error stems either from the profound need of human beings to find a stable form of companionship, or from a subconscious assumption, namely the assumption that observations obtained by means of a calculating mechanism are endowed with the qualities of unquestionable certainty and unquestionable universality.

This assumption, however, is far-fetched, as can be seen by an examination of the inherent limitations of computation itself. Artificial intelligence uses algebraic operations that are carried out in a logical sequence (for example, if the value of X is greater than that of Y, multiply X by Y; otherwise divide X by Y). This method of calculation, the so-called algorithm, is neither objective nor neutral.[9] Moreover, since it is based on algebra, it can only examine realities formalised in numerical terms.[10]

Nor should it be forgotten that algorithms designed to solve highly complex problems are so sophisticated that it is difficult for programmers themselves to understand exactly how they arrive at their results. This tendency towards sophistication is likely to accelerate considerably with the introduction of quantum computers that will operate not with binary circuits (semiconductors or microchips) but according to the highly complex laws of quantum physics. Indeed, the continuous introduction of increasingly high-performance microchips has already become one of the reasons for the dominant use of artificial intelligence by those few nations equipped in this regard.

Whether sophisticated or not, the quality of the answers that artificial intelligence programs provide ultimately depends on the data they use and how they are structured.

Finally, I would like to indicate one last area in which the complexity of the mechanism of so-called Generative Artificial Intelligence clearly emerges. Today, no one doubts that there are magnificent tools available for accessing knowledge, which even allow for self-learning and self-tutoring in a myriad of fields. Many of us have been impressed by the easily available online applications for composing a text or producing an image on any theme or subject. Students are especially attracted to this, but make disproportionate use of it when they have to prepare papers.

Students are often much better prepared for, and more familiar with, using artificial intelligence than their teachers. Yet they forget that, strictly speaking, so-called generative artificial intelligence is not really "generative". Instead, it searches big data for information and puts it together in the style required of it. It does not develop new analyses or concepts, but repeats those that it finds, giving them an appealing form. Then, the more it finds a repeated notion or hypothesis, the more it considers it legitimate and valid. Rather than being "generative", then, it is instead "reinforcing", in the sense that it rearranges existing content, helping to consolidate it, often without checking whether it contains errors or preconceptions.

In this way, it not only runs the risk of legitimising fake news and strengthening a dominant culture's advantage, but, in short, it also undermines the educational process itself. Education should provide students with the possibility of authentic reflection, yet it runs the risk of being reduced to a repetition of notions, which will increasingly be evaluated as unobjectionable, simply because of their constant repetition.[11]

Putting the dignity of the human person back at the centre, in light of a shared ethical proposal

A more general observation should now be added to what we have already said. The season of technological innovation in which we are currently living is accompanied by a particular and unprecedented social situation in which it is increasingly difficult to find agreement on the major issues concerning social life. Even in communities characterised by a certain cultural continuity, heated debates and arguments often arise, making it difficult to produce shared reflections and political solutions aimed at seeking what is good and just.

Thus, aside from the complexity of legitimate points of view found within the human family, there is also a factor emerging that seems to characterise the above-mentioned social situation, namely, a loss, or at least an eclipse, of the sense of what is human and an apparent reduction in the significance of the concept of human dignity.[12] Indeed, we seem to be losing the value and profound meaning of one of the fundamental concepts of the West: that of the human person. Thus, at a time when artificial intelligence programs are examining human beings and their actions, it is precisely the ethos concerning the understanding of the value and dignity of the human person that is most at risk in the implementation and development of these systems. Indeed, we must remember that no innovation is neutral. Technology is born for a purpose and, in its impact on human society, always represents a form of order in social relations and an arrangement of power, thus enabling certain people to perform specific actions while preventing others from performing different ones. In a more or less explicit way, this constitutive power dimension of technology always includes the worldview of those who invented and developed it.

This likewise applies to artificial intelligence programs. In order for them to be instruments for building up the good and a better tomorrow, they must always be aimed at the good of every human being. They must have an ethical inspiration.

Moreover, an ethical decision is one that takes into account not only an action's outcomes but also the values at stake and the duties that derive from those values. That is why I welcomed both the 2020 signing in Rome of the Rome Call for AI Ethics,[13] and its support for that type of ethical moderation of algorithms and artificial intelligence programs that I call "algor-ethics".[14] In a pluralistic and global context, where we see different sensitivities and multiple hierarchies in the scales of values, it might seem difficult to find a single hierarchy of values. Yet, in ethical analysis, we can also make use of other types of tools: if we struggle to define a single set of global values, we can, however, find shared principles with which to address and resolve dilemmas or conflicts regarding how to live.

This is why the Rome Call was born: with the term "algor-ethics", a series of principles are condensed into a global and pluralistic platform that is capable of finding support from cultures, religions, international organizations and major corporations, which are key players in this development.

The politics that is needed

We cannot, therefore, conceal the concrete risk, inherent in its fundamental design, that artificial intelligence might limit our worldview to realities expressible in numbers and enclosed in predetermined categories, thereby excluding the contribution of other forms of truth and imposing uniform anthropological, socio-economic and cultural models. The technological paradigm embodied in artificial intelligence runs the risk, then, of becoming a far more dangerous paradigm, which I have already identified as the technocratic paradigm.[15]We cannot allow a tool as powerful and indispensable as artificial intelligence to reinforce such a paradigm, but rather, we must make artificial intelligence a bulwark against its expansion.

This is precisely where political action is urgently needed. The Encyclical Fratelli Tutti reminds us that for many people today, politics is a distasteful word, often due to the mistakes, corruption and inefficiency of some politicians. There are also attempts to discredit politics, to replace it with economics or to twist it to one ideology or another. Yet can our world function without politics? Can there be an effective process of growth towards universal fraternity and social peace without a sound political life?[16]

Our answer to these questions is: No! Politics is necessary! I want to reiterate in this moment that in the face of many petty forms of politics focused on immediate interests [...] true statecraft is manifest when, in difficult times, we uphold high principles and think of the long-term common good. Political powers do not find it easy to assume this duty in the work of nation-building (Laudato Si', 178), much less in forging a common project for the human family, now and in the future.[17]

Esteemed ladies and gentlemen!

My reflection on the effects of artificial intelligence on humanity leads us to consider the importance of healthy politics so that we can look to our future with hope and confidence. I have written previously that global society is suffering from grave structural deficiencies that cannot be resolved by piecemeal solutions or quick fixes. Much needs to change, through fundamental reform and major renewal. Only a healthy politics, involving the most diverse sectors and skills, is capable of overseeing this process. An economy that is an integral part of a political, social, cultural and popular programme directed to the common good could pave the way for different possibilities which do not involve stifling human creativity and its ideals of progress, but rather directing that energy along new channels (Laudato Si', 191).[18]

This is precisely the situation with artificial intelligence. It is up to everyone to make good use of it, but the onus is on politics to create the conditions for such good use to be possible and fruitful.

Thank you.



Ad spending is climbing, thanks to tireless consumers and AI – Marketplace

Global spending on advertising is likely to pass the trillion-dollar mark for the first time next year, according to media agency GroupM.

That would represent an increase of nearly 8% over this year and it means the ad industry will cross the threshold a year earlier than GroupM initially expected.

And that total, by the way? It does not include any of the money being poured into election advertising here in the United States.

The increase in ad spending is here despite high interest rates and consumers with dwindling savings and rising credit card debt, all indications that consumers aren't exactly champing at the bit to buy stuff.

GroupM's previous prediction reflected high interest rates, which usually slow down consumer spending. "But we didn't see that play out to the extent that we sort of expected over the first quarter or half of 2024," said Kate Scott-Dawkins with GroupM.

She said there's another factor forcing spending upward: the artificial intelligence boom, of course. The report says AI could inform more than 94% of ad spending before the end of the decade.

Elea McDonnell Feit, a marketing professor at Drexel University, said AI is increasingly being used to help advertisers find the right customer at the right time, and place that ad in the right content.

And so advertisers are willing to spend more, because AI could make every dollar they spend more effective.

It can also customize any kind of ad, from static images to TikTok videos, said Bobby Zhou, a marketing professor at the University of Maryland's Robert H. Smith School of Business.

"The level of micro-targeting, the ads that you see, the ad copy that you see will be substantially different from the ad copy that I see, Bobby sees," he said.

So even if Bobby and I are shown the same running shoes, I'll see them in my favorite color, with an explanation of why they'd be great for someone in my neighborhood living my lifestyle.

"That is the power of generative AI, and it's already happening," Zhou said.
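To make that concrete, here is a minimal sketch of how per-user ad copy might be generated with a large language model. The profile fields, prompt wording, model name and use of the OpenAI Python SDK are illustrative assumptions, not a description of any advertiser's actual pipeline.

```python
# Hypothetical sketch: personalizing ad copy for one product across many users.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set; any
# text-generation client could be swapped in.
from openai import OpenAI

client = OpenAI()

def personalized_ad(product: str, profile: dict) -> str:
    """Build a prompt from a user profile and ask the model for tailored copy."""
    prompt = (
        f"Write two sentences of ad copy for {product}. "
        f"The reader's favorite color is {profile['favorite_color']}, "
        f"they live in {profile['neighborhood']}, "
        f"and their lifestyle is best described as {profile['lifestyle']}. "
        "Mention the color and why the product suits that lifestyle."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Two users, same shoes, different copy.
for user in (
    {"favorite_color": "teal", "neighborhood": "Capitol Hill", "lifestyle": "trail running"},
    {"favorite_color": "red", "neighborhood": "downtown", "lifestyle": "commuting on foot"},
):
    print(personalized_ad("a pair of running shoes", user))
```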

GroupM said the next big question is whether all of this AI-generated content will be as effective at getting people to buy stuff as human-made content already is.



Artificial Intelligence Strategy May Promise More Widespread Use of Portable, Robotic Exoskeletons on Earth and in … – ERAU News

Safer, more efficient movements for factory workers and astronauts, and improved mobility for people with disabilities could someday become a more widespread reality, thanks to new research published June 12 in the journal Nature.

Called exoskeletons, wearable robotic frameworks for the human body promise easier movement, but technological hurdles have limited their broader application, explained Dr. Shuzhen Luo of Embry-Riddle Aeronautical University, first author of the Nature paper, with corresponding author Dr. Hao Su of North Carolina State University (NC State) and other colleagues.

To date, exoskeletons must be pre-programmed for specific activities and individuals, based on lengthy, costly, labor-intensive tests with human subjects, Luo noted.

Dr. Shuzhen Luo (Photo: Embry-Riddle/Daryl Labello)

Now, researchers have described a "super smart," or learned, controller that leverages data-intensive artificial intelligence (AI) and computer simulations to train portable, robotic exoskeletons.

"This new controller provides smooth, continuous torque assistance for walking, running or climbing stairs without the need for any human-involved testing," Luo reported. "With only one run on a graphics processing unit, we can train a control law, or 'policy,' in simulation, so that the controller can effectively assist all three activities and various individuals."

Driven by three interconnected, multi-layered neural networks, the controller learns as it goes, evolving through millions of epochs of musculoskeletal simulation to improve human mobility, explained Dr. Luo, assistant professor of Mechanical Engineering at Embry-Riddle's Daytona Beach, Florida, campus.

The experiment-free, learning-in-simulation framework, deployed on a custom hip exoskeleton, generated what appear to be the largest metabolic rate reductions of portable hip exoskeletons to date, with average reductions in wearers' energy expenditure of 24.3% for walking, 13.1% for running and 15.4% for stair-climbing.

The energy reduction rates were calculated by comparing the performance of human subjects both with and without the robotic exoskeleton, Su of NC State explained. "That means it's a true measure of how much energy the exoskeleton is saving," said Su, associate professor of Mechanical and Aerospace Engineering. "This work is essentially making science fiction reality, allowing people to burn less energy while conducting a variety of tasks."
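As a worked illustration of that comparison (the metabolic power values below are invented, not figures from the study), the percentage reduction is simply the assisted cost measured against the unassisted baseline:

```python
# Illustrative only: hypothetical metabolic power values in watts,
# not measurements from the Nature study.
def percent_reduction(baseline_watts: float, assisted_watts: float) -> float:
    """Percentage drop in metabolic cost relative to moving without the device."""
    return (baseline_watts - assisted_watts) / baseline_watts * 100.0

print(percent_reduction(300.0, 227.1))  # ~24.3%, matching the reported walking average
```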

The approach is believed to be the first to demonstrate the feasibility of developing controllers, in simulation, that bridge the so-called simulation-to-reality, or "sim2real," gap, while significantly improving human performance.

"Previous achievements in reinforcement learning have tended to focus primarily on simulation and board games," Luo said, "whereas we proposed a new method, namely a dynamics-aware, data-driven reinforcement learning way to train and control wearable robots to directly benefit humans."

The framework may offer a generalizable and scalable strategy for the rapid, widespread deployment of a variety of assistive robots for both able-bodied and mobility-impaired individuals, added Su.

As noted, exoskeletons have traditionally required handcrafted control laws based on time-consuming human tests to handle each activity and account for differences in individual gaits, researchers explained in Nature. A learning-in-simulation approach suggested a possible solution to those obstacles.

"The resulting dynamics-aware, data-driven reinforcement learning approach dramatically expedites the development of exoskeletons for real-world adoption," Luo said. The closed-loop simulation incorporates both the exoskeleton controller and physics models of musculoskeletal dynamics, human-robot interaction and muscle reactions to generate efficient and realistic data. In this way, a control policy can evolve, or learn, in simulation.
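The broad pattern of learning a controller entirely in simulation can be sketched in a few lines of Python. The toy dynamics, reward and random-search policy update below are assumptions chosen for brevity; they are not the neural-network architecture or training algorithm described in the Nature paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(policy_weights: np.ndarray, steps: int = 200) -> float:
    """Toy stand-in for a musculoskeletal simulation.
    State: [hip angle, hip velocity]; the policy outputs an assistive torque.
    Reward: negative effort, here squared tracking error plus a small torque
    penalty. Entirely synthetic."""
    state = np.array([0.3, 0.0])
    total_reward = 0.0
    for t in range(steps):
        target = 0.3 * np.sin(0.05 * t)  # desired gait trajectory
        torque = float(policy_weights @ np.array([state[0] - target, state[1]]))
        # crude spring-damper dynamics nudged by the assistive torque
        accel = -4.0 * (state[0] - target) - 0.5 * state[1] + torque
        state = state + 0.05 * np.array([state[1], accel])
        total_reward -= (state[0] - target) ** 2 + 0.01 * torque ** 2
    return total_reward

# Simple random-search "learning in simulation": perturb the policy and
# keep perturbations that raise the simulated reward.
policy = np.zeros(2)
best = simulate(policy)
for _ in range(300):
    candidate = policy + 0.1 * rng.standard_normal(2)
    score = simulate(candidate)
    if score > best:
        policy, best = candidate, score

print("learned policy weights:", policy, "reward:", round(best, 3))
```

The real framework replaces the toy dynamics with musculoskeletal and human-robot interaction models, and the random search with data-driven reinforcement learning over neural networks, but the train-entirely-in-simulation loop has the same basic shape.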

"Our method provides a foundation for turnkey solutions in controller development for wearable robots," Luo said.

Future research will focus on unique gaits, for walking, running or stair climbing, to help people who have disabilities such as stroke, osteoarthritis and cerebral palsy, as well as those with amputations.

Contributors: The Nature paper was authored by Shuzhen Luo of Embry-Riddle Aeronautical University, with Menghan Jiang, Sainan Zhang, Junxi Zhu, Shuangyue Yu, Israel Dominguez Silva and Tian Wang of North Carolina State University; and corresponding author Hao Su of North Carolina State University and the University of North Carolina at Chapel Hill; Elliott Rouse of the University of Michigan, Ann Arbor; Bolei Zhou of the University of California, Los Angeles; Hyunwoo Yuk of the Korea Advanced Institute of Science and Technology; and Xianlian Zhou of the New Jersey Institute of Technology.

Yufeng Kevin Chen of the Massachusetts Institute of Technology provided constructive feedback in support of the paper, Experiment-free exoskeleton assistance via learning in simulation.

Funding Disclosures: The research was supported in part by a National Science Foundation (NSF) CAREER award (CMMI 1944655); the National Institute on Disability, Independent Living and Rehabilitation Research (DRRP 90DPGE0019); a Switzer Research Distinguished Fellow (SFGE22000372); the NSF Future of Work (2026622); and the National Institutes of Health (1R01EB035404).

In keeping with Natures publication policies, any potential competing interests were disclosed in the paper. Su and Luo, a former postdoctoral researcher at NC State who is now on the faculty at Embry-Riddle, are co-inventors on intellectual property related to the controller described here.

Nature, June 12, DOI: 10.1038/s41586-024-07382-4. After the journal's embargo lifts on June 12 at 11 a.m. ET, the paper will be online at https://www.nature.com/articles/s41586-024-07382-4.



Generative AI: Taking the Leap While Navigating Its Risks – CEOWORLD magazine

"Don't let fear hold you back; take a leap of faith and see where it leads." (Curious George)

We are at the start of an incredible technological advance called Generative Artificial Intelligence (GAI). We don't know where it will lead us. Every day brings more enhancements of this technology and more stories about how it is good or bad. Sometimes, it feels like we are on a precipice. It is an exciting time to be leading: you get to shape the future of the organization you lead and take advantage of all that GAI has to offer while minding the challenges that come with it.

Leaders need to understand what it is and how it can be used in decision-making.

What is GAI?

"Generative artificial intelligence is artificial intelligence capable of generating text, images, videos, or other data using generative models, often in response to prompts. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics." (Wikipedia)

If you are a novice, there are a few excellent resources you can get started with:

GAI in Business Operations and Decision-Making

"AI has the potential to automate mundane tasks, freeing us for work that requires uniquely human traits such as creativity and critical thinking or, possibly, managing and curating the AI's creative output." (Ethan Mollick, Co-Intelligence)

In the early 2000s, Big Data gained momentum in business decision-making, reflecting the ability of decision-making tools to handle the large amounts of data available from information systems. For example, the company I cofounded, Retail Solutions, provided analytics to retailers and CPG companies based on retail data such as point-of-sale, distribution, and inventory, which helped them make informed decisions about what to promote, how much inventory to carry, and how to prevent out-of-stocks.

In the last decade, machine learning, a subset of artificial intelligence, became part of the arsenal of tools businesses use. Its use has allowed enterprises to harness the power of their data to make operations more efficient and derive valuable insights about customer behavior. An example of the use of machine learning can be found in the recommendation engines used by businesses like Netflix. The algorithms learn from the vast amount of customer data to understand each customer's watching behavior and suggest what to watch next based on it, and also based on customers whose tastes are similar.
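As an illustration of the "customers whose tastes are similar" idea, here is a minimal user-based collaborative-filtering sketch. The tiny ratings matrix and scoring rule are invented for exposition and are far simpler than anything a streaming service actually runs.

```python
import numpy as np

# Rows: users, columns: titles; 0 means "not watched/rated". Synthetic data.
titles = ["Drama A", "Thriller B", "Comedy C", "Documentary D"]
ratings = np.array([
    [5.0, 4.0, 0.0, 0.0],   # user 0 has not seen C or D
    [4.0, 5.0, 1.0, 5.0],   # user 1, similar tastes to user 0, loved D
    [1.0, 0.0, 5.0, 2.0],   # user 2, different tastes
])

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def recommend(user: int, k_top: int = 1) -> list[str]:
    """Score unseen titles by other users' ratings, weighted by similarity."""
    sims = np.array([cosine(ratings[user], ratings[other]) if other != user else 0.0
                     for other in range(len(ratings))])
    scores = sims @ ratings                  # similarity-weighted ratings per title
    scores[ratings[user] > 0] = -np.inf      # never re-recommend seen titles
    best = np.argsort(scores)[::-1][:k_top]
    return [titles[i] for i in best]

print(recommend(0))  # user 0 is steered toward Documentary D, which similar user 1 loved
```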

Today, businesses can mine even more data with GAI, such as call center interactions, email texts, and financial reports. GAI affords businesses quick summarization of the vast amount of internal and external data. A semantic search of information available across documents, product catalogs, and knowledge bases has been made possible by the power of the large language models (LLMs) which enable GAI. My previous article, "GenAI Unleashed: A Leader's Guide for Maximizing Global Impact in Talent Management, Content Creation, and Customer Support," described several business areas that can benefit from GAI.
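To show what embedding-based semantic search looks like in practice, here is a small sketch using the sentence-transformers library; the model name, example documents and query are illustrative assumptions, and a production system would add a vector store, document chunking and access controls.

```python
# A minimal semantic-search sketch over a handful of internal "documents".
# Assumes `pip install sentence-transformers`; the model name below is a
# commonly used small embedding model, chosen for illustration.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Refund policy: customers may return items within 30 days with a receipt.",
    "Q3 financial summary: revenue grew 8% driven by the loyalty program.",
    "Call center script for handling delayed shipment complaints.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 1) -> list[str]:
    """Return the documents whose embeddings are closest to the query embedding."""
    query_vector = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector  # cosine similarity (vectors are normalized)
    ranked = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in ranked]

# A question phrased nothing like the document's wording still finds it.
print(search("How long do shoppers have to send something back?"))
```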

All this power comes with some downside. The technology is in the early stages, and the LLMs tend to hallucinate and make up falsehoods. Leaders also need to be mindful of the bias in the underlying data (which, by the way, reflects the bias of the humans who generated the data). The accuracy of GAI solutions needs improvement. However, as I mentioned in a previous article, "Riding the Wave of Generative AI: Tips for Enterprise Leaders," there are three things a leader can do to get started, namely, understand where the technology is, identify how GAI can help your business, and set up experiments.

Collaboration and Augmentation

"The key to success in the AI era will be to understand how to leverage AI to augment human capabilities." (Unknown)

Keep the words "augment" and "collaborate" in mind as you consider the many ways you can use GAI. Approach GAI as a tool that can work alongside humans to increase productivity.

Today, GAI is reasonably capable of generating some decisions, but humans must decide whether and how to use it.

A 2023 research paper, "Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence," found that using ChatGPT for mid-level professional writing tasks substantially increased productivity. It says:

"ChatGPT could increase workers' productivity in two ways. On the one hand, it could substitute for worker effort by quickly producing output of satisfactory quality that workers directly submit, letting them reduce the time they spend on the task. On the other hand, it could complement workers' skills: humans and ChatGPT working together could produce more than the sum of their parts, for example if ChatGPT aids with the brainstorming process, or quickly produces a rough draft and humans then edit and improve on the draft."

It is essential to consider the collaboration parameters when using GAI in decision-making. Richard Benjamins, former Chief Responsible AI Officer at Telefonica and founder of its AI for Society and Environment area, proposed a Choices Framework for considering ethical and responsible choices when using GAI. He defines a continuum of ethics and impact on society, with "Use AI for good" at one end and "Malicious use of AI" at the other, and "Do not use AI if effects cannot be mitigated," "Best effort to avoid the negative impact of AI," and "Negative effect of AI is considered collateral damage" in between. He says organizations need to decide, based on their norms and values, where they want to be on that continuum of ethics.

Embrace GAI with Caution

The advent of Generative Artificial Intelligence (GAI) can be compared to historical technological and scientific breakthroughs that transformed society, such as the Industrial Revolution. Generative AI is not a panacea for all problems; therefore, understanding what it is, its benefits, and its shortcomings would be tremendously advantageous for an enterprise. The practice of holding opposing ideas in mind is invaluable in understanding the continuously changing world of GAI. With many voices expressing opposing views on the advances in GAI, one has to think for oneself. Understand the diverse points of view, and then decide for yourself. And, as Curious George said, don't let fear hold you back.

