Archive for the ‘Machine Learning’ Category

The 7 Best Websites to Help Kids Learn About AI and Machine Learning – MUO – MakeUseOf

If you have kids or teach kids, you likely want them to learn the latest technologies to help them succeed in school and their future jobs. With rapid tech advancements, artificial intelligence and machine learning are essential skills you can teach young learners today.

Thankfully, you can easily access free and paid online resources to support your kids' and teens' learning journey. Here, we explore some of the best e-learning websites for students to gain experience in AI and ML technology.

Do you want to empower your child's creativity and AI skills? You might want to schedule a demo session with Kubrio. The alternative education website offers remote learning experiences on the latest technologies like ChatGPT.

Students eight to 18 years old learn about diverse subjects at their own pace. At the same time, they get to team up with learners who share their interests.

Kubrio's AI Prompt Engineering Lab teaches your kids to use the best online AI tools for content creation. They'll learn to develop captivating stories, interactive games, professional-quality movies, engaging podcasts, catchy songs, aesthetic designs, and software.

Kubrio also gamifies AI learning in the form of "Quests." Students select their Quest, complete their creative challenge, build a portfolio, and earn points and badges. This program is currently in beta, but you can sign them up for the private beta for the following Quests:

Explore the Create&Learn website if you want to introduce your kids to the latest technological advancements at an early age. The e-learning site is packed with classes that help kids discover the fascinating world of robots, artificial intelligence, and machine learning.

Depending on their grade level, your child can join AI classes such as Hello Tech!, AI Explorers, Python for AI, and AI Creators. The classes are live online, interactive, and hands-on. Students from grades two up to 12 learn how AI works and how it can be applied to the latest technology, such as self-driving cars, face recognition, and games.

Create&Learn's award-winning curriculum was designed by experts from well-known institutions like MIT and Stanford. But if you aren't sure your kids will enjoy the sessions, you can take advantage of a free introductory class (this option is available for select classes only).

One of the best ways for students to learn ML and AI is through hands-on machine learning project ideas for beginners. Machine Learning for Kids gives students hands-on training with machine learning, a subfield of AI that enables computers to learn from data and experience.

Your kids will train a computer to recognize text, pictures, numbers, or sounds. For instance, you can train the model to distinguish between images of a happy person and a sad person using free photos from the internet. We tried this, and then tested the model with a new photo, and it was able to successfully recognize the uploaded image as a happy person.

Afterward, your child will try their hand at the Scratch, Python, or App Inventor coding platform to create projects and build games with their trained machine learning model.

The online platform is free, simple, and user-friendly. You'll get access to worksheets, lesson plans, and tutorials, so you can learn with your kids. Your child will also be guided through the main steps of completing a simple machine learning project.

If you and your kids are curious about how artificial intelligence and machine learning work, go through Experiments with Google. The free website explains machine learning and AI through simple, interactive projects for learners of different ages.

Experiments with Google is a highly engaging platform that will give students hours of fun and learning. Your child will learn to build a DIY sorter using machine learning, create and chat with a fictional character, conduct their own orchestra, use a camera to bring their doodles to life, and more.

Many of the experiments don't require coding. Choose the projects appropriate for your child's level. If you're working with younger kids, try Scroobly; Quick, Draw!; and LipSync with YouTube. Meanwhile, teens can learn how experts build a neural network to learn about AI or explore other, more complex projects using AI.

Do you want to teach your child how to create amazing things with AI? If yes, then AI World School is an ideal edtech platform for you. The e-learning website offers online and self-learning AI and coding courses for kids and teens seven years old and above.

AI World School courses are designed by a team of educators and technologists. The courses cover AI Novus (an introduction to AI for ages seven to ten), Virtual Driverless Car, Playful AI Explorations Using Scratch, and more.

The website also provides affordable resources for parents and educators who want to empower their students to be future-ready. Just visit the Project Hub to order $1-3 AI projects; you can filter by age group, skill level, and software.

Kids and teens can also try the free games when they click Play AI for Free. Converse with an AI model named Zhorai, teach it about animals, and let it guess where these animals live. Students can also ask an AI bot about the weather in any city, or challenge it to a competitive game of tic-tac-toe.

AIClub is a team of AI and software experts with real-world experience. It was founded by Dr. Nisha Talagala, a computer science Ph.D. graduate from UC Berkeley. After failing to find a fun and easy program to help her 11-year-old daughter learn AI, she went ahead and built her own.

AIClub's progressive curriculum is designed for elementary, middle school, and high school students. Your child will learn to create unique projects using AI and coding. Start them young, and they can flex their own AI portfolio to the world.

You can also opt to enroll your child in the one-on-one class with expert mentors. This personalized online class enables students to research topics they care about on a flexible schedule. They'll also receive feedback and advice from their mentor to improve their research.

What's more, students enrolled in one-on-one classes can enter their research in competitions or present their findings at a conference. According to the AIClub Competition Winners page, several students in the program have already been awarded in national and international competitions.

Have you ever wondered how machines can learn from data and perform tasks that humans can do? Check out Teachable Machine, a website by Google Developers that lets you create your own machine learning models in minutes.

Teachable Machine is a fun way for kids and teens to start learning the concepts and applications of machine learning. You don't need any coding skills or prior knowledge, just your webcam, microphone, or images.

Students can play with images, sounds, poses, text, and more. They'll understand how tweaking the settings and data changes the performance and accuracy of the models.

Teachable Machine is a learning tool and a creative platform that unleashes the imagination. Your child can use their models to create games, art, music, or anything else they can dream of. If they need inspiration, point them to the gallery of projects created by other users.

Artificial intelligence and machine learning are rapidly transforming the world. If you want your kids and teens to learn about these fascinating fields and develop their critical thinking skills and creativity, these websites can help them.

Whether you want to explore Experiments with Google, AI World School, or other sites in this article, you'll find plenty of resources and fun challenges to spark your child's curiosity and imagination. There are also ways to use existing AI tools in school so that they can become more familiar with them.


Google and OpenAI are Walmarts besieged by fruit stands – TechCrunch


OpenAI may be synonymous with machine learning now and Google is doing its best to pick itself up off the floor, but both may soon face a new threat: rapidly multiplying open source projects that push the state of the art and leave the deep-pocketed but unwieldy corporations in their dust. This Zerg-like threat may not be an existential one, but it will certainly keep the dominant players on the defensive.

The notion is not new by a long shot; in the fast-moving AI community, it's expected to see this kind of disruption on a weekly basis. But the situation was put in perspective by a widely shared document purported to originate within Google. "We have no moat, and neither does OpenAI," the memo reads.

I won't encumber the reader with a lengthy summary of this perfectly readable and interesting piece, but the gist is that while GPT-4 and other proprietary models have obtained the lion's share of attention (and, indeed, income), the head start they've gained with funding and infrastructure is looking slimmer by the day.

While the pace of OpenAI's releases may seem blistering by the standards of ordinary major software releases, GPT-3, ChatGPT and GPT-4 were certainly hot on each other's heels if you compare them to versions of iOS or Photoshop. But they are still occurring on the scale of months and years.

What the memo points out is that in March, a foundation language model from Meta, called LLaMA, was leaked in fairly rough form. Within weeks, people tinkering around on laptops and penny-a-minute servers had added core features like instruction tuning, multiple modalities and reinforcement learning from human feedback. OpenAI and Google were probably poking around the code, too, but they didn't, and couldn't, replicate the level of collaboration and experimentation occurring in subreddits and Discords.

Could it really be that the titanic computation problem that seemed to pose an insurmountable obstacle (a moat) to challengers is already a relic of a different era of AI development?

Sam Altman already noted that we should expect diminishing returns when throwing parameters at the problem. Bigger isn't always better, sure, but few would have guessed that smaller was instead.

The business paradigm being pursued by OpenAI and others right now is a direct descendant of the SaaS model. You have some software or service of high value and you offer carefully gated access to it through an API or some such. It's a straightforward and proven approach that makes perfect sense when you've invested hundreds of millions into developing a single monolithic yet versatile product like a large language model.

If GPT-4 generalizes well to answering questions about precedents in contract law, great; never mind that a huge portion of its intellect is dedicated to being able to parrot the style of every author who ever published a work in the English language. GPT-4 is like a Walmart. No one actually wants to go there, so the company makes damn sure there's no other option.

But customers are starting to wonder, why am I walking through 50 aisles of junk to buy a few apples? Why am I hiring the services of the largest and most general-purpose AI model ever created if all I want to do is exert some intelligence in matching the language of this contract against a couple hundred other ones? At the risk of torturing the metaphor (to say nothing of the reader), if GPT-4 is the Walmart you go to for apples, what happens when a fruit stand opens in the parking lot?

It didn't take long in the AI world for a large language model to be run, in highly truncated form of course, on (fittingly) a Raspberry Pi. For a business like OpenAI, its jockey Microsoft, Google or anyone else in the AI-as-a-service world, it effectively beggars the entire premise of their business: that these systems are so hard to build and run that they have to do it for you. In fact it starts to look like these companies picked and engineered a version of AI that fit their existing business model, not vice versa!

Once upon a time you had to offload the computation involved in word processing to a mainframe; your terminal was just a display. Of course that was a different era, and we've long since been able to fit the whole application on a personal computer. That process has occurred many times since as our devices have repeatedly and exponentially increased their capacity for computation. These days when something has to be done on a supercomputer, everyone understands that it's just a matter of time and optimization.

For Google and OpenAI, the time came a lot quicker than expected. And they weren't the ones to do the optimizing (and may never be, at this rate).

Now, that doesn't mean that they're plain out of luck. Google didn't get where it is by being the best (not for a long time, anyway). Being a Walmart has its benefits. Companies don't want to have to find the bespoke solution that performs the task they want 30% faster if they can get a decent price from their existing vendor and not rock the boat too much. Never underestimate the value of inertia in business!

Sure, people are iterating on LLaMA so fast that they're running out of camelids to name them after. (Incidentally, I'd like to thank the developers for an excuse to just scroll through hundreds of pictures of cute, tawny vicuñas instead of working.) But few enterprise IT departments are going to cobble together an implementation of Stability's open source derivative-in-progress of a quasi-legal leaked Meta model over OpenAI's simple, effective API. They've got a business to run!

But at the same time, I stopped using Photoshop years ago for image editing and creation because the open source options like Gimp and Paint.net have gotten so incredibly good. At this point, the argument goes the other direction. Pay how much for Photoshop? No way, we've got a business to run!

What Google's anonymous authors are clearly worried about is that the distance from the first situation to the second is going to be much shorter than anyone thought, and there doesn't appear to be a damn thing anybody can do about it.

Except, the memo argues: embrace it. Open up, publish, collaborate, share, compromise. As they conclude:

Google should establish itself a leader in the open source community, taking the lead by cooperating with, rather than ignoring, the broader conversation. This probably means taking some uncomfortable steps, like publishing the model weights for small ULM variants. This necessarily means relinquishing some control over our models. But this compromise is inevitable. We cannot hope to both drive innovation and control it.


Meta Platforms scoops up AI networking chip team from Graphcore – The Economic Times

Meta Platforms Inc has hired an Oslo-based team that until late last year was building artificial intelligence networking technology at British chip unicorn Graphcore. A Meta spokesperson confirmed the hirings in response to a request for comment, after Reuters identified 10 people whose LinkedIn profiles said they worked at Graphcore until December 2022 or January 2023 and subsequently joined Meta in February or March of this year.

"We recently welcomed a number of highly-specialized engineers in Oslo to our infrastructure team at Meta. They bring deep expertise in the design and development of supercomputing systems to support AI and machine learning at scale in Meta's data centers," said Jon Carvill, the Meta spokesperson.

On top of that, Meta is now rushing to join competitors like Microsoft Corp and Alphabet Inc's Google in releasing generative AI products capable of creating human-like writing, art and other content, which investors see as the next big growth area for tech companies.

Carvill declined to say what they would be working on at Meta.

Meta already has an in-house unit designing several kinds of chips aimed at speeding up and maximizing efficiency for its AI work, including a network chip that performs a sort of air traffic control function for servers, two sources told Reuters.

A new category of network chip has emerged to help keep data moving smoothly within those computing clusters. Nvidia, AMD and Intel Corp all make such network chips.

Graphcore, one of the UK's most valuable tech startups, once was seen by investors like Microsoft and venture capital firm Sequoia as a promising potential challenger to Nvidia's commanding lead in the market for AI chip systems.

However, it faced a setback in 2020 when Microsoft scrapped an early deal to buy Graphcore's chips for its Azure cloud computing platform, according to a report by UK newspaper The Times. Microsoft instead used Nvidia's GPUs to build the massive infrastructure powering ChatGPT developer OpenAI, which Microsoft also backs.

Sequoia has since written down its investment in Graphcore to zero, although it remains on the company's board, according to a source familiar with the relationship. The write-down was first reported by Insider in October.

A Graphcore spokesperson confirmed the setbacks, but said the company was "perfectly positioned" to take advantage of accelerating commercial adoption of AI.

Graphcore was last valued at $2.8 billion after raising $222 million in its most recent investment round in 2020.


How to get going with machine learning – Robotics and Automation News

We can see everyone around us talking about machine learning and artificial intelligence. But is the hype around machine learning justified? Let's dive into the details of machine learning and how we can start it from scratch.

Machine learning is a method through which we teach our computers and electronic gadgets to provide accurate answers. Whenever data is fed into the system, it processes that data in a defined way to find precise answers to the questions asked.

For example, questions such as: What is the taste of avocado?, What are the things to consider when buying an old car?, How do I drive safely on the road?, and so on.

But using machine learning, the computer is trained to give precise answers even without input from developers. In other words, machine learning is a sophisticated approach in which computers are trained to provide correct answers to complicated questions.

Furthermore, they are trained to learn more, distinguish confusing questions, and provide satisfactory answers.

Machine learning and AI are the future. Therefore, people who learn these skills and become proficient will be first in line to reap the rewards. There are companies that offer machine learning services to augment your business.

In other words, to gain outsized advantages, we should engage with these services to drive exponential growth for our business.

Initially, developers do a massive amount of training and modeling, along with other crucial development work. Additionally, vast amounts of data are used to provide precise results and effectively reduce decision-making time.

Here are the simple steps that can get you started with machine learning.

First, make up your mind and choose the tool with which you want to master machine learning development.

Always look for the language that is most practical and most widely accepted across multiple platforms.

As we know, machine learning involves a rigorous process of modeling and training, so consistent practice is essential.

To take the most advantage, create a polished, lucid portfolio to demonstrate your learned skills to the world.

When we apply an algorithm to a data set, the output we get is called a model (also known as a hypothesis).

In technical terms, a feature is a quantifiable property that describes a characteristic of the thing being modeled. Features are used as input to a model, and one of their crucial roles is to let algorithms recognize and classify examples.

For example, to recognize a fruit, a model might use features such as smell, taste, size, and color. Features are vital for distinguishing the target of a query using several measurable characteristics.

The output value or variable that the machine learning model is asked to predict is called the target.

For example, in the previous data set we measured fruits; each example's target is a label naming a specific fruit, such as orange, banana, apple, or pineapple.

In machine learning, training is the process of fitting the model's weights and biases to our labeled examples. In supervised learning, many iterations are run to adjust the machine learning algorithm until it reaches minimum loss and produces the correct output.

Once a model is trained, we can feed it new inputs and it will generate predicted outputs, or labels. However, it is essential to verify that the model performs accurately on unseen data before concluding that it is working well.
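To make these terms concrete, here is a minimal sketch in Python. It is purely illustrative: the fruit measurements and feature names are invented, and the "training" is the simplest possible kind (a nearest-neighbor model that memorizes its feature/target pairs), but the workflow of features in, predicted label out, is the same one described above.

```python
import math

# Made-up training examples. Each pairs a feature vector
# [weight_in_grams, smoothness_on_a_0_to_10_scale] with its target label.
training_data = [
    ([150, 9], "apple"),
    ([170, 9], "apple"),
    ([130, 3], "orange"),
    ([140, 2], "orange"),
]

def predict(features):
    """Label an unseen example with the label of the closest training example."""
    nearest = min(training_data, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

# Prediction on unseen data: a heavy fruit with smooth skin.
print(predict([160, 8]))  # "apple"
```

Real algorithms fit weights rather than memorizing examples, but the vocabulary maps directly: `training_data` holds the features and targets, building `predict` is the training step, and calling it on a new fruit is the prediction step.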

As machine learning continues to grow in significance for enterprise operations, and AI becomes more practical in corporate settings, the machine learning platform wars will only intensify.

Continued research into deep learning and AI is increasingly focused on developing more general applications. Cutting-edge AI models require extensive training to produce an algorithm that is highly optimized to perform one task.

But some researchers are exploring ways to make models more flexible, seeking techniques that allow a machine to apply context learned from one task to future, different tasks.



Artificial Intelligence and Machine Learning in Cancer Detection – Targeted Oncology

Toufic Kachaamy, MD

City of Hope Phoenix

Since the first artificial intelligence (AI) enabled medical device received FDA approval in 1995 for cervical slide interpretation, there have been 521 FDA approvals provided for AI-powered devices as of May 2023.1 Many of these devices are for early cancer detection, an area of significant need since most cancers are diagnosed at a later stage. For most patients, an earlier diagnosis means a higher chance of positive outcomes such as cure, less need for systemic therapy and a higher chance of maintaining a good quality of life after cancer treatment.

While an extensive review of these is beyond the scope of one article, this article will summarize the major areas where AI and machine learning (ML) are currently being used and studied for early cancer detection.

The first area is large database analyses for identifying patients at risk for cancer or with early signs of cancer. These models analyze the electronic medical record, a structured digital database, and use pattern recognition and natural language processing to identify patients with specific characteristics. These include individuals with signs and symptoms suggestive of cancer; those at risk of cancer based on known risk factors; or specific health measures associated with cancer. For example, pancreatic cancer has a relatively low incidence but is still the fourth leading cause of cancer death. Because of the low incidence, screening the general population is neither practical nor cost-effective. ML can be used to analyze specific health outcomes such as new onset hyperglycemia2 and certain health data from questionnaires3 to classify members of the population as high risk for pancreatic cancer. This allows the screened population to be "enriched with pancreatic cancer," thus making screening higher yield and more cost-effective at an earlier stage.
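A toy sketch can illustrate the idea of such risk classification. Everything below is invented for illustration, not clinical: the patient records, the two "risk factors" (age in decades and a new-onset-hyperglycemia flag), and the diagnosis labels are fabricated, and a real model would use far richer data. The structure, though, matches the approach described: fit a classifier to labeled records, then flag only high-scoring individuals for screening.

```python
import math

# Fabricated records: [age_in_decades, new_onset_hyperglycemia (0 or 1)]
# paired with a later-diagnosis label (1 = diagnosed). Illustration only.
records = [
    ([4, 0], 0), ([5, 0], 0), ([6, 0], 0),
    ([7, 1], 1), ([8, 1], 1), ([9, 1], 1),
]

# Fit a tiny logistic-regression model by plain gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(2000):
    grad_w, grad_b = [0.0, 0.0], 0.0
    for x, y in records:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = p - y
        grad_w[0] += err * x[0]
        grad_w[1] += err * x[1]
        grad_b += err
    w[0] -= lr * grad_w[0] / len(records)
    w[1] -= lr * grad_w[1] / len(records)
    b -= lr * grad_b / len(records)

def risk(age_decades, hyperglycemia):
    """Predicted probability of later diagnosis for one (made-up) patient."""
    return 1 / (1 + math.exp(-(w[0] * age_decades + w[1] * hyperglycemia + b)))

# "Enrich" the screened population: flag only patients above a risk threshold.
print(risk(8, 1) > 0.5, risk(4, 0) > 0.5)
```

Screening only the flagged patients is what makes the enriched population higher yield: the same number of screening exams is spent on the people the model scores as most likely to benefit.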

Another area leveraging AI and ML is image analysis. Human vision is sharpest centrally, representing less than 3 degrees of the visual field. Peripheral vision has significantly less spatial resolution and is more suited for rapid movements and "big picture" analysis. In addition, "inattentional blindness," or missing significant findings when focused on a specific task, is one of the vulnerabilities of humans, as demonstrated in the study showing that even experts missed a gorilla in a CT scan when searching for lung nodules.3 Machines are not susceptible to fatigue, distraction, blind spots or inattentional blindness. In a study that compared a deep learning algorithm to radiologists from the National Lung Screening Trial, the algorithm performed better than the radiologists in detecting lung cancer on chest X-rays.4

AI algorithm analysis of histologic specimens can serve as an initial screening tool and as a real-time interactive assistant during histological analysis.5 AI is capable of diagnosing cancer with high accuracy.6 It can accurately determine grades, such as the Gleason score for prostate cancer, and identify lymph node metastasis.7 AI is also being explored for predicting gene mutations from histologic analysis, which has the potential to decrease cost and improve time to analysis. Both are limitations in today's practice that restrict universal gene analysis in cancer patients,8 even as such analyses gain a role in precision cancer treatment.9

An exciting and up-and-coming area for AI and deep learning is the combination of the above approaches, such as pairing large database analysis with pathology assessment and/or image analyses. For example, using medical record analysis and CXR findings, deep learning was used to identify patients at high risk for lung cancer who would benefit the most from lung cancer screening. This has great potential, especially since only 5% of patients eligible for lung cancer screening are currently being screened.10

Finally, there is the holy grail of cancer detection: blood-based multicancer detection tests, many of which are already available or in development, and which often use AI algorithms to develop, analyze and validate the test.11

It is hard to imagine an area of medicine that AI and ML will not impact. AI is unlikely, at least for the foreseeable future, to replace physicians. Instead, it will be used to enhance physician performance and improve accuracy and efficiency. However, it is essential to note that machine-human interaction is very complicated, and we are only scratching the surface of this era. It is premature to assume that real-world outcomes will mirror the outcomes seen in trials. Any outcome that involves human analysis and final decision-making is affected by human performance, and training and studying human behavior are needed for human-machine interaction to produce optimal outcomes. For example, randomized controlled studies have shown increased polyp detection during colonoscopy using computer-aided detection or AI-based image analysis.12 However, real-life data did not show similar findings,13 likely due to differences in how AI affects different endoscopists.

Artificial intelligence and machine learning dramatically alter how medicine is practiced, and cancer detection is no exception. Even in the medical world, where change is typically slower than in other disciplines, AI's pace of innovation is coming upon us quickly and, in certain instances, faster than many can grasp and adapt.
