Archive for the ‘Artificial Intelligence’ Category

Arm Cortex microprocessor for artificial intelligence (AI), imaging, and audio introduced by Microchip – Military & Aerospace Electronics

CHANDLER, Ariz. Microchip Technology Inc. in Chandler, Ariz., is introducing the SAMA7G54 Arm Cortex A7-based microprocessor that runs as fast as 1 GHz for low-power stereo vision applications with accurate depth perception.

The SAMA7G54 includes a MIPI CSI-2 camera interface and a traditional parallel camera interface for high-performing yet low-power artificial intelligence (AI) solutions that can be deployed at the edge, where power consumption is at a premium.

AI solutions often require advanced imaging and audio capabilities, which typically are found only on multi-core microprocessors that also consume much more power.

When coupled with Microchip's MCP16502 Power Management IC (PMIC), this microprocessor enables embedded designers to fine-tune their applications for best power consumption vs. performance, while also optimizing for low overall system cost.

Related: Embedded computing sensor and signal processing meets the SWaP test

The MCP16502 is supported by Microchip's mainline Linux distribution for the SAMA7G54, allowing for easy entry and exit from available low-power modes, as well as support for dynamic voltage and frequency scaling.

For audio applications, the device offers four I2S digital audio ports, an eight-microphone array interface, an S/PDIF transmitter and receiver, and a stereo four-channel audio sample rate converter. Its multiple microphone inputs support source localization for smart speaker or video conferencing systems.
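Source localization from a microphone array is typically done by estimating the time difference of arrival (TDOA) between microphone pairs, often via cross-correlation. A minimal sketch in Python with NumPy; the signals, sample rate, and geometry here are illustrative and not specific to the SAMA7G54:

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, sample_rate):
    """Estimate the delay (in seconds) of sig_b relative to sig_a
    via the peak of their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / sample_rate

# Illustrative signals: a pulse arriving 25 samples later at the second mic.
fs = 16_000
pulse = np.zeros(512)
pulse[100] = 1.0
delayed = np.roll(pulse, 25)
tdoa = estimate_tdoa(pulse, delayed, fs)  # 25 / 16000 s
```

From the TDOA and the known microphone spacing, the angle of arrival can then be computed with basic trigonometry; a real implementation would also use generalized cross-correlation (e.g., GCC-PHAT) for robustness to reverberation.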

The SAMA7G54 also integrates Arm TrustZone technology with secure boot, and secure key storage and cryptography with acceleration. The SAMA7G54-EK Evaluation Kit (CPN: EV21H18A) features connectors and expansion headers for easy customization and quick access to embedded features.

For more information contact Microchip online at http://www.microchipdirect.com.

Read the original here:
Arm Cortex microprocessor for artificial intelligence (AI), imaging, and audio introduced by Microchip - Military & Aerospace Electronics

What’s Your Future of Work Path With Artificial Intelligence? – CMSWire

What does the future of artificial intelligence in the workplace look like for employee experience?

Over the last few years, artificial intelligence (AI) has become a very significant part of business operations across all industries. It's already making an impact as part of our daily lives, from appliances, voice assistants, search, surveillance, marketing, autonomous vehicles, video games, and TVs to large sporting events.

AI is the result of applying cognitive science techniques to emulate human intellect and artificially create something that performs tasks that only humans can perform, like reasoning, natural communication and problem-solving. It does this by leveraging machine learning techniques: reading and analyzing large data sets to identify patterns, detect anomalies and make decisions with no human intervention.
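The pattern- and anomaly-detection step described above can be as simple as flagging statistical outliers in a data set. A minimal, generic sketch in Python using only the standard library; the data and threshold are illustrative:

```python
import statistics

def detect_anomalies(values, z_threshold=3.0):
    """Flag values whose z-score (distance from the mean in
    standard deviations) exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

# Illustrative data: daily transaction counts with one obvious spike.
counts = [102, 98, 101, 99, 103, 100, 97, 500]
anomalies = detect_anomalies(counts, z_threshold=2.0)  # flags 500
```

Production systems replace this z-score rule with learned models (isolation forests, autoencoders, and the like), but the principle is the same: learn what "normal" looks like from the data and flag deviations without human intervention.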

In this ever-evolving market, AI has become crucial for businesses looking to upgrade workplace infrastructure and improve employee experience. According to Precedence Research, the AI market size is projected to surpass $1,597.1 billion by 2030, expanding at a CAGR of 38.1% from 2022 to 2030.

Currently, AI is being used in the workplace to automate jobs that are repetitive or require a high degree of precision, like data entry or analysis. AI can also be used to make predictions about customer behavior or market trends.

In the future, AI is expected to increasingly be used to augment human workers, providing them with recommendations or suggestions based on the data that it has been programmed to analyze.

Today's websites are capable of using AI to quickly detect potential customer intent in real time based on interactions by the online visitor, and to show more engaging and personalized content to enhance the possibility of converting customers. As AI continues to develop, its capabilities in the workplace are expected to increase, making it an essential tool for businesses looking to stay ahead of the competition.

Kai-Fu Lee, a famous computer scientist, businessman and writer, said in a 2019 interview with CBS News that he believes 40% of the world's jobs will be replaced by robots capable of automating tasks.

AI has the potential to replace many types of jobs that involve mechanical or structured tasks that are repetitive in nature. Examples we are seeing now include robotic vehicles, drones, surgical devices, logistics, call centers, and administrative tasks like housekeeping, data entry and proofreading. Even armies of robots for security and defense are being discussed.

That said, AI is going to be a huge disruption worldwide over the next decade or so. Most innovations come from disruptions; take the COVID-19 pandemic as an example: it dramatically changed how we work now.

While AI takes some jobs, it also creates many opportunities. When it comes to strategic thinking, creativity, emotions and empathy, humans will always win over machines. This is a call to adapt to the change and to strengthen the human factor in the workplace in every possible dimension. Nokia and BlackBerry mobile phones and Kodak cameras are living examples of failure to acknowledge digital disruption. Timely market research, using the right technology and enabling the workforce to adapt to change can bring success to businesses through digital transformation.

Related Article: What's Next for Artificial Intelligence in Customer Experience?

There will be changes in the traditional means of doing things, and more jobs will be generated. AI has the potential to revolutionize the workplace, transforming how we do everything from customer service to driving cars in places as busy as downtown San Francisco. However, there are still several challenges that need to be overcome before AI can be widely implemented in the workplace.

One of the biggest challenges is developing algorithms that can reliably replicate human tasks. This is difficult because human tasks often involve common sense and reasoning, which are hard for computers to understand. We should also ensure that AI systems are fair and unbiased. This is important because AI systems are often used to make decisions about things like hiring and promotions, and if they are biased, this can lead to discrimination. We live in a world of diversity, equity, and inclusion (DEI), and mistakes with AI can be costly for businesses. It may take a very long time to develop a customer-centric model that is completely dependent on AI, one that is reliable and trustworthy.

The future of AI is hard to predict, but there are a few key trends that are likely to shape its development. The increasing availability of data will allow AI systems to become more accurate and efficient, and as businesses and individuals rely on AI more and more, the need for new types of AI applications will mean more work and jobs. As these trends continue, AI is likely to have a significant impact on the workforce. It can very well lead to the automation of many cognitive tasks, including those that are currently performed by human workers.

This could result in a reduction in the overall demand for labor as well as an increase in the need for workers with skills that complement the AI systems. AI is the future of work; there's no doubt about that, but how it will shape the future of the human workforce remains to be seen.

Many are worried that AI will remove many jobs, while others see it as an opportunity to increase efficiency and accuracy in the workforce. No matter which side you're on, it's important to understand how AI is changing the way we work and what that means for the future.

Related Article: 8 Examples of Artificial Intelligence in the Workplace

Let's look at a few real-world examples that are already changing the way we work:

All of the above implementations look promising. However, it is important to note that AI should be used as a supplement to human intelligence, not a replacement for it. When used properly, AI can help businesses thrive. The role of AI in the workplace is ever evolving, and it will be interesting to see how businesses adopt these technologies and improve the overall work environment to provide the best employee experience.

An October 2020 Gallup poll found that 51% of workers are not engaged: they are psychologically unattached to their work and company.

Here are some employee experience aspects that AI could improve:

Employees need to know and trust that you have their best interests in mind. The value of AI in human resources is going to be critical to deliver employee experiences along with human connection and values.

Continue reading here:
What's Your Future of Work Path With Artificial Intelligence? - CMSWire

Does Artificial Intelligence Really Have the Potential to Create Transformative Art? – Literary Hub

I. The Situation

In 1896, the Lumière brothers released a 50-second-long film, The Arrival of a Train at La Ciotat, and a myth was born. The audiences, it was reported, were so entranced by the new illusion that they jumped out of the way as the flickering image steamed towards them.

The urban legend of film-induced mass panic, established well before 1900, illustrated a valid contention even if the story was, in fact, untrue: the technology had produced a new emotional reaction. That reaction was hugely powerful but inchoate and inarticulate. Nobody knew what it was doing or where it would go. Nobody had any idea that it would turn into what we call film. Today, the world is in a similar state of bountiful confusion over the creative use of artificial intelligence.

Already the power of the new technology is evident to everyone who has managed to use it. Artificial intelligence can recreate the speaking voice of dead persons. It can produce images from instructions. It can fill in the missing passages from damaged texts. It can imitate any and all literary styles. It can convert any given authorial corpus into logarithmic probability. It can create characters that speak in unpredictable but convincing ways. It can write basic newspaper articles. It can compose adequate melodies. But what any of this means, or to what uses these new abilities will ultimately be turned, are as yet unclear.

There is some fascinating creative work emerging from this primordial ooze of nascent natural language processing (NLP). Vauhini Vara's GPT-based requiem for her sister and the poetry of Sasha Stiles are experiments in the avant-garde tradition. (My own NLP work falls into this category as well, including the short story this essay accompanies.)

Then there are attempts to use AI in more popular media. AI Dungeon, an infinitely generated text adventure driven by deep learning, explores the gaming possibilities. Perhaps the most exciting format for NLP is bot-character generation. Project December allows its users to recreate dead people, to have conversations with them. But there's no need for these generated voices to be based on actual human beings. Lucas Rizzotto concocted a childhood imaginary friend, Magnetron, which existed inside his family's microwave, out of OpenAI and a hundred-page backstory.

These early attempts to find spheres of expression for the new technology are dynamic and exciting, but they remain marginal. This work has not yet resonated with the public, nor has it solidified into coherent practice.

The scattered few of us who use this technology feel its eerie power. The encounter with deep learning is simultaneously ultramodern and ancient, manufacturing an unsettling impression of being recognized by a machine, or of having access, through machines, to a vast human pattern, even a collective unconscious or noosphere. But that sensation has not yet been communicated to audiences. They don't participate in it. They see only the results, the words on the page, which are little more than aftereffects.

The literary world tends to engage creative technology with either petulant resistance or slavish adulation. Neither are particularly useful. A novel about social media is still considered surprisingly innovative, and even the smartphone rarely makes an appearance in literary fiction.

Recent novels about artificial intelligence, such as Klara and the Sun by Kazuo Ishiguro or Machines Like Me by Ian McEwan, have absolutely nothing to do with actual artificial intelligence as it currently exists or will exist in the foreseeable future. They are, frankly, embarrassingly lazy on the subject.

Meanwhile, the hacker aesthetic has had its basic fraud exposed: it fantasized technologists as rebel outsiders, poised to make the world a better place, as a cover for monopolists who need excuses to justify their hunger for total impunity.

Both the resistance and the adulation are stupid, and so we find ourselves toxically ill-prepared for the moment we are facing: the intrusion of technology into the creative process. The machines are no longer lurking on the periphery; they are entering the temple, piercing the creative act itself.

The Lumière brothers produced roughly 1,400 minute-length films, or "views" as they were called at the time, but nobody could see what these views would blossom into: A Trip to the Moon, and The Birth of a Nation, and Citizen Kane, and Vertigo, and Apocalypse Now. Creative AI is not a new technique. It is an entirely new artistic medium. It needs to be developed as such. The question facing the small band of creators using artificial intelligence today is how we get from The Arrival of a Train at La Ciotat to Citizen Kane.

II. The Direction

One thing is certain: Nobody needs machines to make shitty poetry. Humans make quite enough of that already. The blossoming of AI art into its unique and particular reality will demand a unique and particular practice, one that sheds traditional categories of art as they currently exist and which engages audiences in ways they have never been engaged before.

One potential danger, at least in the short term, is that the technology is advancing so quickly it is unclear whether any artistic practice that emerges from it will have time to mature before it becomes obsolete.

Every example of creative AI I have listed above uses GPT-3 (Generative Pre-trained Transformer 3). But Google recently released its own Transformer-based large language model, PaLM, which promises low-level reasoning functions. What does that mean? What can be built from that new function? Art requires technical mastery, and also conscious transcendence of technical mastery. Even keeping up with the latest AI developments, never mind getting access to the tech, is a full-time job. And art that does nothing more than show off the power of a machine isn't doing its job.

Then there is the question of whether anyone wants computer-generated art. One of the somewhat confounding aspects of the internet generally is that it is hugely creative but fundamentally resistant to art, or at least to anything that identifies itself as art. TikTok has turned into a venue of explosive creativity, but there is no Martin Scorsese of TikTok, nor could there ever be. Internet-specific genres, like Vine, are inherently ephemeral and impersonal. They aren't art forms so much as widespread crafting activities, like Victorian-era collages, or Japanese chigiri-e, or Ukrainian pysanky.

When people want to read consciously made, individually controlled language, they tend to pick up physically printed books, as ridiculous as that sounds. Creators follow the audiences. The top ten novels published this year are not fundamentally different, in their modes of composition, dissemination and consumption, from the novels of the 1950s.

But the resistance creative AI faces, both from artists and from audiences, is a sign of the power and potential of the new medium. The most exciting promise of creative AI is that it runs in complete opposition to the overarching value that defines contemporary art: Identity. The practice itself removes identity from the equation.

Since so few people have used this technology, I'm afraid I'll have to use the short story that accompanies this essay as an example, although, to be clear, many people are using this tech in completely different ways, and my own approach is representative of nothing but my own fascinations and capacities.

A few months ago, I received access to the product of a Canadian AI company called Cohere, which allows for sophisticated, nimble manipulations of Natural Language Processing. Through Cohere, I was able to create algorithms derived from various styles. These included Thomas Browne, Eileen Chang, Dickens, Shakespeare, Chekhov, Hemingway and others, including anthologies of love stories and Chinese nature poetry.

I then took those algorithms and had them write sentences and paragraphs for me on selected themes: a marketplace, love at first sight, a life played out after falling in love. The ones I liked I kept. The ones I didn't I threw out. Then I took the passages those algorithms had provided and input them to Sudowrite, the stochastic writing tool. Sudowrite generated texts on the basis of the prompts the other algorithms had generated.
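The generate-then-curate loop described above can be sketched generically. This is a hypothetical Python sketch: the `generate` callable stands in for any text-generation API (the actual Cohere and Sudowrite interfaces are not shown here), and `fake_generate` and the `keep` filter are stand-ins for the model call and the author's taste:

```python
def curate(prompts, generate, keep):
    """Generate a passage per prompt, keep the ones that pass the
    human (or heuristic) filter, and return the survivors, which
    can then be fed forward as prompts for the next stage."""
    kept = []
    for prompt in prompts:
        passage = generate(prompt)
        if keep(passage):
            kept.append(passage)
    return kept

# Hypothetical stand-ins for illustration only.
def fake_generate(prompt):
    return f"{prompt}, rendered in a borrowed style"

stage_one = curate(
    ["a marketplace", "love at first sight"],
    generate=fake_generate,
    keep=lambda text: "marketplace" in text,
)
# stage_one would then seed the next model, as the essay
# describes doing with Sudowrite.
```

The point of the sketch is the structure, not the code: the human sits inside the loop as the selection function, which is what distinguishes this practice from simply printing a model's output.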

To generate "Autotuned Love Story" I had to develop a separate artistic practice around the technology. I'm not proposing my practice as a model; in fact, now that I've done it, I don't see why anyone else would do what I've done. My point is that what I created here, and how I created it, is distinct from traditional artistic creation.

The love story below is my attempt to develop an idealized love story out of all the love stories that I have admired. It exists on the line between art and criticism. "Autotuned Love Story" certainly isn't mine. I built it, but it's not my love story. It's the love story of the machines interacting with all the love stories I have loved. I confess that I find it eerie; there is something true and moving in it that I recognize but which I also can't place.

Creative AI is not an expression of a self. Rather it is the permutation and recombination and reframing of other identities. It is not, nor will it be, nor can it be, a representation of a generation or a race or a time. It is not a voice. Whatever voice is, it is the opposite. The process of using creative AI is literally derivative. The power of creative AI is its strange mixture of human and other. The revelation of the medium will be the exploitation of that fact.

Because creative AI is not self-expression, its development will be different from other media. On that basis, two propositions:

Artists should not use artificial intelligence to make art that people could make otherwise.

The display of technology cannot be the purpose of the art.

Creative AI should, above all, be itself and not something else. And secondly, it should allow users to forget that it's artificial intelligence altogether. Otherwise it will be little more than advertising for the tech, or an alibi for the artist.

Fortunately, there is a predecessor that can serve as a model, and which follows the two directions above: Hip hop. Hip hop was an art form determined, from its inception, by technological innovation. Kool Herc invented the two-turntable setup that allowed the isolation of the break, and Grandmaster Flash developed backspin, punch phrasing, and scratching. These developments required enormous technical facility but also a concentration on effects. The artists shaped the tech in response to audience reactions.

Hip hop also demanded an entirely new musicality to maximize the effects of the innovation. Building beats and sampling required a comprehensive musical knowledge. The best DJs had the widest access to music of all kinds, and were each, in a sense, archivists. They engaged in raids on the past, using history for their own purposes.

Just as hip hop artists developed a consummate familiarity with earlier forms of popular music, the artists of artificial intelligence who use large language models will need to understand the history of the sentence and the development of literary style in all forms and across all genres. Linguistic AI will demand the skills of close reading and a historical breadth as the basic terms of creation.

And when we look at the bad AI art available now, the failings of the art are almost never technical. It's usually a failure to possess deep knowledge, or sometimes any knowledge, of narrative technique or poesis.

In its early years, hip hop had a defiance and a focus on effect which AI art should aspire to. Its artists showed a willingness and capacity to create and abandon values. They did not worship their instruments. They concentrated on the results, and that spirit largely survives. A good question to ask as a rough guide to the creative direction of AI art: What would Ye do? WWYD?

III. The Stakes

Creative AI promises more powerful illusions and more all-consuming worlds. Eric Schmidt, at The Atlantic, recently offered an example of the future awaiting us:

If you imagine a child born today, you give the child a baby toy or a bear, and that bear is AI-enabled. And every year the child gets a better toy. Every year the bear gets smarter, and in a decade, the child and the bear who are best friends are watching television and the bear says, "I don't really like this television show." And the kid says, "Yeah, I agree with you."

Despite this terrifying promise, AI art will probably remain small and marginal in the short term, just as film was for several decades after its birth.

The development of creative AI is much, much more important than how cool the new short stories or interactive games can be. For one thing, artistic practice may serve as a desperately needed bridge between artificial intelligence and the humanities. As it stands, those who understand literature and history dont understand the technology that is about to transform the framework of language, and those who are building the technology that is revolutionizing language dont understand literature or history.

Also, the political uses of artificial intelligence will follow creative practices. That's certainly what happened with film. A few decades after The Arrival of a Train at La Ciotat, Lenin was using film as the primary propaganda method of the Soviet Union, and the proto-fascist Gabriele D'Annunzio filmed his triumphal entrance into the city of Fiume. Whatever forms creative AI takes will, almost immediately, be used to manipulate and control mass audiences.

Creative AI is a confrontation with the fact that an unknown number of aspects of art, so vital to our sense of human freedom, can be reduced to algorithms, to a series of external instructions. Moravec's paradox, that the more complex and high-level a task, the easier it is to compute, is fully at play. Capacities requiring a lifetime of dedication to master, like a personal literary style, can simply be programmed. The basic things remain mysteries. What makes an image powerful? What makes a story compelling? The computers have no answers to these questions.

There is a line thrusting through the world and ourselves dividing what is computable from what is not. It is a line driving straight into the heart of the mystery of humanity. AI art will ride on this line.

_________________________________________________

[This story was generated by means of natural language processing, using Cohere AI and Sudowrite accessing GPT-3.]

The rain in the market smelled like rusting metal and wet stones. The stallholders had no real need to sell nor did they care much for their customers. There was a cookery demonstration. There was a magician. There was a video games stall. There was a beauty parlour. The rain was like a mist at first, fine and barely noticeable, but not long after the streets were flowing with a torrent of mud and water.

Among huddles of people, they met in a stall that sold umbrellas. The eyes of one were large and green, soft and milky. The other's eyes were like iced coffee.

Shyness came upon them at once. Shyness and fear. A butcher's boy, with a beautiful nose, stood beside a post, making grimaces at a plan that was chalked out on the top of it. A ragged little boy, barefooted, and with his face smeared with blood, from having just grazed his nose against the corner of a post, began playing at marbles with other boys of his own size. Their smiles were interminable, wavering and forgetful, and it seemed as though they could not control their lips, that they smiled against their will while they thought of something else.

"Alone?"

"Yes."

The rain became like a dirty great mop being wrung out above their heads. The market became more uneasy, and gave place to a sea of noises that on both sides added to the general clamour. The crowd began to press in on them, to snatch at their coats, to groan, to criticize and to complain of cold and hunger, of want of clean clothes, of lack of decent shelter. The rain was unremitting, just like the flow of people, the flow of traffic, the flow of tired animals. The crowd erupted and all at once it seemed that there were too many people.

When the crowd closed up again, the two were separated from one another. The rain died down and the market was now very different. They looked for each other like lost children in a train station. It was a different kind of a market, darker, older, dingier, more chaotic. The pavement was covered with mud and mire and straw and dung.

They met by accident, which is only a way of saying that we have not looked for something before it comes forward, that they were both in the world and the world is small.

*

They never met again, or maybe they did.

Maybe, at first, they had the same delight in touching, in meeting, in forming, in blurring, in drawing out. They had secrets, and they shared those secrets. As one's hands rolled over the other, they lay as still as fish. It seemed to both of them that they could not live in the old way; they could not go on living as though there were nothing new in their lives. They had to settle down together somewhere, to live for themselves, alone, to have their own home, where they would be their own masters. They went abroad, changed their lives. One was a manager of a railway branch line. The other became a teacher in a school. And the large study in which they spent their evenings was so full of pictures and flowers that it was difficult to move about without upsetting something. Pictures of all sorts, landscapes in water-colour, engravings after the old masters, and the albums filled with the photographs of relatives, friends, and children, were scattered everywhere about the bookcases, on the tables, on the chairs. Love is like money: the kind you have and do not want to lose, the kind you lose and treasure. The thought of death, which had moved them so profoundly, no longer caused in either the former fear and remorse, a sound that lost its echo in the endless, sad retreat, a phantom of caresses down hallways empty and forsaken.

Maybe they lived that life. Maybe they didn't. But in the market, among the detritus, the splintered edges, they had once found each other, and found each other and lost each other again. They had said only that, yes, they were alone.

The rain had smelled like sodden horses and rusting metal and wet stones.

Read more:
Does Artificial Intelligence Really Have the Potential to Create Transformative Art? - Literary Hub

How artificial intelligence is boosting crop yield to feed the world – Freethink

Over the last several decades, genetic research has seen incredible advances in gene sequencing technologies. In 2004, scientists completed the Human Genome Project, an ambitious project to sequence the human genome, which cost $3 billion and took 10 years. Now, a person can get their genome sequenced for less than $1,000 and within about 24 hours.

Scientists capitalized on these advances by sequencing everything from the elusive giant squid to the Ethiopian eggplant. With this technology came promises of miraculous breakthroughs: all diseases would be cured and world hunger would be a thing of the past.

So, where are these miracles?

We need about 60 to 70% more food production by 2050.

In 2015, a group of researchers founded Yield10 Bioscience, an agriculture biotech company that aimed to use artificial intelligence to start making those promises into reality.

Two things drove the development of Yield10 Bioscience.

"One, obviously, [the need for] global food security: we need about 60 to 70% more food production by 2050," explained Dr. Oliver Peoples, CEO of Yield10 Bioscience, in an interview with Freethink. "And then, of course, CRISPR."

It turns out that having the tools to sequence DNA is only step one of manufacturing the miracles we were promised.

The second step is figuring out what a sequence of DNA actually does. In other words, it's one thing to discover a gene, and it is another thing entirely to discover a gene's role in a specific organism.

In order to do this, scientists manipulate the gene: delete it from an organism and see what functions are lost, or add it to an organism and see what is gained. During the early genetics revolution, although scientists had tools to easily and accurately sequence DNA, their tools to manipulate DNA were labor-intensive and cumbersome.

It's one thing to discover a gene, and it is another thing entirely to discover a gene's role in a specific organism.

Around 2012, CRISPR technology burst onto the scene, and it changed everything. Scientists had been investigating CRISPR, a system that evolved in bacteria to fight off viruses, since the '80s, but it took 30 years for them to finally understand how they could use it to edit genes in any organism.

Suddenly, scientists had a powerful tool that could easily manipulate genomes. Equipped with DNA sequencing and editing tools, scientists could complete studies that once took years or even decades in mere months.

Promises of miracles poured back in, with renewed vigor: CRISPR would eliminate genetic disorders and feed the world! But of course, there is yet another step: figuring out which genes to edit.

Over the last couple of decades, researchers have compiled databases of millions of genes. For example, GenBank, the National Institutes of Health's (NIH) genetic sequence database, contains 38,086,233 genes, of which only tens of thousands have some functional information.

For example, ARGOS is a gene involved in plant growth. Consequently, it is a very well-studied gene. Scientists found that genetically engineering Arabidopsis, a fast-growing plant commonly used to study plant biology, to express lots of ARGOS made the plant grow faster.

Dozens of other plants have ARGOS (or at least genes very similar to it), such as pineapple, radish, and winter squash. Those plants, however, are hard to genetically manipulate compared to Arabidopsis. Thus, ARGOS's function in crops in general hasn't been as well studied.

The big crop companies are struggling to figure out what to do with CRISPR.

CRISPR suddenly changed the landscape for small groups of researchers hoping to innovate in agriculture. It was an affordable technology that anyone could use but no one knew what to do with it. Even the largest research corporations in the world dont have the resources to test all the genes that have been identified.

"I think if you talk to all the big crop companies, they've all got big investments in CRISPR. And I think they're all struggling with the same question, which is, 'This is a great tool. What do I do with it?'" said Dr. Peoples.

The algorithm can identify genes that act at a fundamental level in crop metabolism.

The holy grail of crop science, according to Dr. Peoples, would be a tool that could identify "three or four genetic changes that would double crop production for whatever you're growing."

With CRISPR, those changes could be made right now. However, there needs to be a way to identify those changes, and that information is buried in the massive databases.

To develop the tool that can dig them out, Dr. Peoples' team merged artificial intelligence with synthetic biology, a field of science that involves redesigning organisms to have useful new abilities, such as increasing crop yield or bioplastic production.

This union created Gene Ranking Artificial Intelligence Network (GRAIN), an algorithm that evaluates scientific databases like GenBank and identifies genes that act at a fundamental level in crop metabolism.

That "fundamental level" aspect is one of the keys to GRAIN's long-term success: the algorithm surfaces genes that are shared across multiple crop types, so a powerful gene, once identified, can be applied to many crops at once.
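GRAIN's internals haven't been published, but the cross-crop ranking idea can be illustrated with a toy sketch: given per-species gene annotations, rank genes by how many crop species share them. The gene names and species sets below are hypothetical examples, not real GenBank data.

```python
from collections import Counter

def rank_genes_by_crop_coverage(annotations):
    """Rank genes by the number of crop species in which they appear.

    annotations: dict mapping species name -> set of gene identifiers.
    Returns a list of (gene, species_count) pairs, widest coverage first.
    """
    counts = Counter()
    for genes in annotations.values():
        counts.update(genes)
    return counts.most_common()

# Hypothetical annotations for illustration only.
annotations = {
    "Arabidopsis": {"ARGOS", "GENE_A", "GENE_B"},
    "pineapple":   {"ARGOS", "GENE_B"},
    "radish":      {"ARGOS", "GENE_C"},
}
print(rank_genes_by_crop_coverage(annotations)[0])  # ('ARGOS', 3)
```

A real system would score genes on far more than raw co-occurrence (sequence similarity, metabolic role, experimental evidence), but the principle of favoring genes common across species is the same.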

For example, using the GRAIN platform, Dr. Peoples and his team identified four genes that may significantly impact seed oil content in Camelina, a plant similar to rapeseed (true canola oil). When the researchers increased the activity of just one of those genes via CRISPR, the plants had a 10% increase in seed oil content.

It's not quite a miracle yet, but with more advances in gene editing and AI happening all the time, the promises of the genetic revolution are finally starting to pay off.

We'd love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at tips@freethink.com.

View post:
How artificial intelligence is boosting crop yield to feed the world - Freethink

Instagram is testing artificial intelligence that verifies your age with a selfie scan – WWAY NewsChannel 3

(CNN) Instagram is testing new ways to verify its youngest users' ages, including by using artificial intelligence that analyzes a photo and estimates how old the user is.

Meta-owned Instagram said in a blog post on Thursday that AI is one of three new methods it's testing to verify users' ages on the photo-sharing site. Users will be required to use one of the options to verify their age if they edit their birth date on Instagram from under age 18 to over 18.

Instagram is testing these options first with its users in the United States. It already requires users to state their age when they start using the service, and employs AI in other ways to determine if users are kids or adults.

The move is part of an ongoing push to make sure the photo-sharing app's youngest users see content that is age-appropriate. It comes less than a year after disclosures from a Facebook whistleblower raised concerns about the platform's impact on younger users. Last year, Instagram came under fire when documents leaked by the whistleblower, Frances Haugen, showed it was aware of how the social media site can damage mental health and body image, particularly among teenage girls.

The technology comes from a London-based company called Yoti. An animated video that Instagram posted to its blog gives a sense of how Yoti's AI age estimation works: A user is directed to take a video selfie on their smartphone (Yoti said this step serves as a way to make sure a real person is in the resulting image), and Instagram shares an image from that selfie with the company. Yoti's AI first detects that there is a face in the picture and then scrutinizes its facial features to determine the person's age.

Julie Dawson, Yoti's chief policy and regulatory officer, told CNN Business that its AI was trained with a dataset made up of images of people's faces along with the year and month each person was born. (Documentation the company released in May to explain its technology said it was trained on "millions of diverse facial images.")

"When a new face comes along, it does a pixel-level analysis of that face and then spits out a number, the age estimation, with a confidence value," Dawson said. Once the estimation is completed, Yoti and Instagram delete the selfie video and the still image taken from it.
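Yoti's model itself is proprietary, but the flow Dawson describes (detect a face, estimate an age with a confidence value, then discard the image) can be sketched as below. The `detect_face` and `estimate_age` functions here are hypothetical stand-ins for the real computer-vision models, and the threshold logic is an assumption for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeEstimate:
    age: float         # estimated age in years
    confidence: float  # model confidence in [0, 1]

def detect_face(image: bytes) -> bool:
    """Stand-in for a real face detector; here, any non-empty image 'has' a face."""
    return len(image) > 0

def estimate_age(image: bytes) -> AgeEstimate:
    """Stand-in for a real pixel-level age-estimation model."""
    return AgeEstimate(age=19.0, confidence=0.9)

def verify_age(image: bytes, threshold: float = 18.0) -> Optional[bool]:
    """Return True/False for over/under the threshold, or None if no face is found."""
    if not detect_face(image):
        return None
    estimate = estimate_age(image)
    # Per the article, Yoti and Instagram delete the selfie once the
    # estimate is produced; dropping the local reference mirrors that step.
    del image
    return estimate.age >= threshold
```

With the stub estimator above, `verify_age(b"")` returns `None` (no face) and any non-empty input clears the 18-year threshold; a real deployment would obviously substitute trained models for both stubs.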

Verifying a users age can be a vexing problem for tech companies, in part because plenty of users may not have a government-issued photo ID card that can be checked.

Karl Ricanek, a professor at the University of North Carolina Wilmington and director of the school's Face Aging Group Research Lab, thinks Yoti's technology is a good application of AI.

"It's a worthwhile endeavor to try and protect kids," he said.

Yet while such technology could be helpful to Instagram, a number of factors can make it tricky to accurately estimate age from a picture, Ricanek said, including puberty, which changes a person's facial structure, as well as skin tone and gender.

The recent documentation from Yoti indicates its technology is, on average, slightly less accurate at estimating the ages of kids between 13 and 17 who have darker skin tones than those with lighter skin tones. According to Yoti's data, its age estimate was off, on average, by 1.91 years for females ages 13 to 17 whose skin tones were categorized as the two darkest shades on the Fitzpatrick scale (a six-shade scale commonly used by tech companies to classify skin colors), versus an average error of 1.41 years for females in the same age group whose skin tones were the two lightest shades on the scale.

For kids between the ages of 13 and 17, the technology's estimate of how old they are was off by 1.56 years, on average, according to the document. (For teenagers overall, the average error rate is 1.52 years.)

"What that means, in practice, is that there will be a lot of errors," said Luke Stark, an assistant professor at Western University in Ontario, Canada, who studies the ethical and social implications of AI. "We're still talking about a mean absolute error, either way, of a year to a year and a half," he said.
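The accuracy figures quoted here (1.41, 1.56, and 1.91 years) are mean absolute errors: the average of the absolute differences between estimated and true ages. A minimal illustration of the computation, using made-up ages rather than Yoti's data:

```python
def mean_absolute_error(estimates, truths):
    """Average absolute difference between estimated and true values."""
    pairs = list(zip(estimates, truths))
    return sum(abs(est - true) for est, true in pairs) / len(pairs)

# Hypothetical estimates for four teenagers vs. their true ages.
estimated = [14.0, 17.0, 15.0, 18.0]
actual    = [13.0, 15.0, 16.0, 16.0]
print(mean_absolute_error(estimated, actual))  # 1.5
```

An MAE of 1.5 means the model is off by a year and a half on a typical face, which is why Stark warns that decisions at a hard cutoff like age 18 will misclassify many users near the boundary.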

Several CNN employees, all adults over the age of 25, tried an online demo of Yoti's age-estimation technology. The demo differs from the experience Instagram users will have in that it takes a selfie, rather than a short video, and the result is an age-range estimation, rather than a specific age estimation, Yoti's chief marketing officer, Chris Field, said.

The results varied: For a couple of reporters, the estimated age range was right on target, but for others it was off by many years. For instance, it estimated one editor was between the ages of 17 and 21, when they're actually in their mid-30s.

Among other issues, Stark is also concerned that the technology will contribute to so-called surveillance creep.

"It's certainly problematic, because it conditions people to assume they're going to be surveilled and assessed," he said.

See the original post:
Instagram is testing artificial intelligence that verifies your age with a selfie scan - WWAY NewsChannel 3