Archive for the ‘Machine Learning’ Category

Meta Platforms scoops up AI networking chip team from Graphcore – The Economic Times

Meta Platforms Inc has hired an Oslo-based team that until late last year was building artificial intelligence networking technology at British chip unicorn Graphcore. A Meta spokesperson confirmed the hirings in response to a request for comment, after Reuters identified 10 people whose LinkedIn profiles said they worked at Graphcore until December 2022 or January 2023 and subsequently joined Meta in February or March of this year.

"We recently welcomed a number of highly-specialized engineers in Oslo to our infrastructure team at Meta. They bring deep expertise in the design and development of supercomputing systems to support AI and machine learning at scale in Meta's data centers," said Jon Carvill, the Meta spokesperson.

On top of that, Meta is now rushing to join competitors like Microsoft Corp and Alphabet Inc's Google in releasing generative AI products capable of creating human-like writing, art and other content, which investors see as the next big growth area for tech companies.

Carvill declined to say what they would be working on at Meta.

Meta already has an in-house unit designing several kinds of chips aimed at speeding up and maximizing efficiency for its AI work, including a network chip that performs a sort of air traffic control function for servers, two sources told Reuters.

A new category of network chip has emerged to help keep data moving smoothly within the large computing clusters used for AI work. Nvidia, AMD and Intel Corp all make such network chips.

Graphcore, one of the UK's most valuable tech startups, once was seen by investors like Microsoft and venture capital firm Sequoia as a promising potential challenger to Nvidia's commanding lead in the market for AI chip systems.

However, it faced a setback in 2020 when Microsoft scrapped an early deal to buy Graphcore's chips for its Azure cloud computing platform, according to a report by UK newspaper The Times. Microsoft instead used Nvidia's GPUs to build the massive infrastructure powering ChatGPT developer OpenAI, which Microsoft also backs.

Sequoia has since written down its investment in Graphcore to zero, although it remains on the company's board, according to a source familiar with the relationship. The write-down was first reported by Insider in October.

A Graphcore spokesperson confirmed the setbacks, but said the company was "perfectly positioned" to take advantage of accelerating commercial adoption of AI.

Graphcore was last valued at $2.8 billion after raising $222 million in its most recent investment round in 2020.

See the original post:
Meta Platforms scoops up AI networking chip team from Graphcore - The Economic Times

How to get going with machine learning – Robotics and Automation News

Everyone around us seems to be talking about machine learning and artificial intelligence. But is the hype around machine learning justified? Let's dive into the details of machine learning and how to get started with it from scratch.

Machine learning is a method of teaching computers and electronic devices to provide accurate answers. When data is fed into the system, it processes that data to find precise answers to the questions it is asked.

For example, questions such as: What does an avocado taste like? What should I consider when buying a used car? How do I drive safely on the road? And so on.

With machine learning, the computer is trained to give precise answers even without direct input from developers. In other words, machine learning is a sophisticated approach in which computers are trained to provide correct answers to complicated questions.

Furthermore, such systems are trained to keep learning, handle ambiguous questions, and provide satisfactory answers.

Machine learning and AI are the future. People who learn these skills and become proficient will be first in line to reap the rewards. There are also companies that offer machine learning services to augment your business.

In other words, engaging with these services can give a business a real advantage and support its growth.

Initially, developers carry out a great deal of training and modeling, along with other work essential to machine learning development. Vast amounts of data are used to produce precise results and to reduce decision-making time.

Here are the simple steps that can get you started with machine learning.

Make up your mind and choose the tool or language in which you want to master machine learning development.

Look for a language that is practical and widely supported across multiple platforms.

As we know, machine learning involves a rigorous process of modeling and training, so consistent, deliberate practice is essential.

To take the most advantage, create a clear, well-organized portfolio that demonstrates your skills to the world.

When we apply an algorithm to a data set, the output we get is called a model. It is also known as a hypothesis.
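As a minimal sketch of this idea (using scikit-learn and invented fruit measurements, nothing from the article), applying a classification algorithm to a small data set produces a fitted model that can then answer new questions:

```python
# A minimal sketch: applying a classification algorithm to a tiny, invented
# data set produces a fitted model (the "hypothesis") that can answer new
# questions. scikit-learn is assumed.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical fruit measurements: each row is [weight_grams, diameter_cm]
X = [[150, 7], [170, 8], [120, 6], [1100, 18], [1300, 20]]
y = ["apple", "apple", "apple", "pineapple", "pineapple"]

model = DecisionTreeClassifier().fit(X, y)   # algorithm + data -> model
print(model.predict([[160, 7]]))             # the model predicts for a new fruit
```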

In technical terms, a feature is a measurable property that describes a characteristic of whatever is being observed. Features are what allow algorithms to recognize and classify examples, and they are used as inputs to a model.

For example, to recognize a fruit, a model uses features such as smell, taste, size, and color. Features are vital in distinguishing one target from another.

The output value or variable that the machine learning model is trying to predict is called the target.

For example, in the fruit example above, each label corresponds to a specific fruit, such as orange, banana, apple, or pineapple.
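A hedged sketch of how that fruit example might be encoded in practice: each column is a feature, the "label" column is the target, and every value below is invented for illustration (pandas assumed):

```python
# A hedged sketch of how the fruit example might be encoded: each column is a
# feature, the "label" column is the target, and every value is invented.
import pandas as pd

data = pd.DataFrame({
    "color":     ["orange", "yellow", "red", "brown"],         # feature
    "size_cm":   [7, 18, 8, 25],                               # feature
    "sweetness": [6, 7, 8, 9],                                  # feature (1-10 scale)
    "label":     ["orange", "banana", "apple", "pineapple"],    # target
})

X = pd.get_dummies(data.drop(columns="label"))   # numeric feature matrix
y = data["label"]                                # target (label) vector
print(X)
print(y.tolist())
```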

In machine learning, training is the process of learning the weights and biases that best map inputs to our labeled examples. In supervised learning, the model is adjusted over many iterations so that its loss is minimized and it produces the correct outputs.
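As a rough illustration of training as loss minimization (a toy linear model with invented numbers, not any particular framework), the loop below repeatedly nudges a weight and a bias to reduce the mean squared error:

```python
# A toy sketch of training: repeatedly adjust a weight and a bias so that the
# loss (mean squared error) on the example data gets smaller. All numbers are
# invented; the underlying rule is y = 2x + 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
weight, bias, lr = 0.0, 0.0, 0.05

for step in range(2000):
    # predictions and loss with the current weight and bias
    preds = [weight * x + bias for x in xs]
    errors = [p - y for p, y in zip(preds, ys)]
    loss = sum(e * e for e in errors) / len(xs)

    # gradients of the loss with respect to the weight and the bias
    grad_w = 2 * sum(e * x for e, x in zip(errors, xs)) / len(xs)
    grad_b = 2 * sum(errors) / len(xs)

    # take a small step that reduces the loss
    weight -= lr * grad_w
    bias -= lr * grad_b

print(round(weight, 2), round(bias, 2), round(loss, 5))  # approaches 2.0, 1.0, 0
```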

Once the model is ready, we can feed it new inputs and it will generate a predicted output, or label, for each one. However, it is essential to verify that the model performs well on new, unseen data before concluding that it is working successfully.
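A minimal sketch of that check, assuming scikit-learn and its bundled iris data set purely for illustration: the model learns from one portion of the data and is judged only on the portion it never saw.

```python
# A minimal sketch of evaluating on unseen data, using scikit-learn's bundled
# iris data set purely for illustration: train on one portion, judge on the rest.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn from training data
preds = model.predict(X_test)                                    # predict on unseen data
print("held-out accuracy:", accuracy_score(y_test, preds))
```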

As machine learning grows in significance to enterprise operations and AI becomes more practical in corporate settings, the machine learning platform wars will only intensify.

Continued research into deep learning and AI is increasingly focused on developing more general applications. Current AI models require extensive training to produce an algorithm that is highly optimized to perform a single task.

But some researchers are exploring ways to make models more flexible and are looking for techniques that allow a machine to apply context learned from one task to future, different tasks.


Read the original:
How to get going with machine learning - Robotics and Automation News

Artificial Intelligence and Machine Learning in Cancer Detection – Targeted Oncology

Toufic Kachaamy, MD

City of Hope Phoenix

Since the first artificial intelligence (AI)-enabled medical device received FDA approval in 1995 for cervical slide interpretation, 521 FDA approvals have been granted for AI-powered devices as of May 2023.1 Many of these devices are for early cancer detection, an area of significant need since most cancers are diagnosed at a later stage. For most patients, an earlier diagnosis means a higher chance of positive outcomes such as cure, less need for systemic therapy, and a higher chance of maintaining a good quality of life after cancer treatment.

While an extensive review of these devices is beyond the scope of a single article, this article will summarize the major areas where AI and machine learning (ML) are currently being used and studied for early cancer detection.

The first area is large database analysis for identifying patients at risk for cancer or with early signs of cancer. These models analyze electronic medical records, a structured digital database, and use pattern recognition and natural language processing to identify patients with specific characteristics. These include individuals with signs and symptoms suggestive of cancer; those at risk of cancer based on known risk factors; or those with specific health measures associated with cancer. For example, pancreatic cancer has a relatively low incidence but is still the fourth leading cause of cancer death. Because of the low incidence, screening the general population is neither practical nor cost-effective. ML can be used to analyze specific health findings such as new-onset hyperglycemia2 and certain health data from questionnaires3 to classify members of the population as high risk for pancreatic cancer. This allows the screened population to be "enriched with pancreatic cancer," thus making screening higher yield and more cost-effective at an earlier stage.
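As a purely illustrative sketch (not the published models cited above), a risk classifier of this kind could be trained on record-level features such as new-onset hyperglycemia, age, and recent weight loss, then used to rank patients so that only the highest-scoring slice is referred for screening. All data and feature names below are synthetic assumptions:

```python
# A synthetic, purely illustrative sketch (not the published models cited
# above): train a classifier on invented record-level features and use its
# scores to pick an "enriched" high-risk group for screening.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Invented features per patient: [new_onset_hyperglycemia (0/1), age, recent_weight_loss_kg]
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.normal(65, 10, n),
    rng.normal(2, 3, n),
])
# Invented outcome, loosely tied to the features for illustration only
risk = 0.03 * X[:, 0] + 0.001 * (X[:, 1] - 65) + 0.005 * X[:, 2]
y = (rng.random(n) < 0.02 + risk).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(X)[:, 1]          # predicted risk per patient
flagged = np.argsort(scores)[-50:]           # top slice referred for screening
print("mean predicted risk, flagged vs overall:",
      round(float(scores[flagged].mean()), 3), round(float(scores.mean()), 3))
```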

Another area leveraging AI and ML is image analysis. Human vision is sharpest centrally, representing less than 3 degrees of the visual field. Peripheral vision has significantly less spatial resolution and is more suited for rapid movements and "big picture" analysis. In addition, "inattentional blindness," or missing significant findings when focused on a specific task, is one of the vulnerabilities of humans, as demonstrated in the study that showed even experts missed a gorilla in a CT scan when searching for lung nodules.3 Machines are not susceptible to fatigue, distraction, blind spots, or inattentional blindness. In a study that compared a deep learning algorithm to radiologists from the National Lung Screening Trial, the algorithm performed better than the radiologists in detecting lung cancer on chest X-rays.4

AI algorithm analysis of histologic specimens can serve as an initial screening tool and as a real-time interactive assistant during histologic analysis.5 AI is capable of diagnosing cancer with high accuracy.6 It can accurately determine grades, such as the Gleason score for prostate cancer, and identify lymph node metastasis.7 AI is also being explored for predicting gene mutations from histologic analysis. This has the potential to decrease cost and improve time to analysis, both of which are limitations in today's practice that restrict universal gene analysis in patients with cancer,8 even as gene analysis gains a growing role in precision cancer treatment.9

An exciting and up-and-coming area for AI and deep learning is the combination of the approaches above, such as pairing large-scale data analysis with pathology assessment and/or image analysis. For example, using medical record analysis and CXR findings, deep learning was used to identify patients at high risk for lung cancer who would benefit the most from lung cancer screening. This has great potential, especially since only 5% of patients eligible for lung cancer screening are currently being screened.10

Finally, there is the holy grail of cancer detection: blood-based multicancer detection tests. Many of these, whether already available or still in development, rely on AI algorithms for their development, analysis, and validation.11

It is hard to imagine an area of medicine that AI and ML will not impact. AI is unlikely, at least for the foreseeable future, to replace physicians; rather, it will be used to enhance physician performance and improve accuracy and efficiency. However, it is essential to note that machine-human interaction is very complicated, and we are only scratching the surface of this era. It is premature to assume that real-world outcomes will match the outcomes seen in trials. Any outcome that involves human analysis and final decision-making is affected by human performance. Training and studying human behavior are needed for human-machine interaction to produce optimal outcomes. For example, randomized controlled studies have shown increased polyp detection during colonoscopy using computer-aided detection, or AI-based image analysis.12 However, real-life data did not show similar findings,13 likely because AI affects different endoscopists differently.

Artificial intelligence and machine learning are dramatically altering how medicine is practiced, and cancer detection is no exception. Even in the medical world, where change is typically slower than in other disciplines, AI's pace of innovation is arriving quickly and, in certain instances, faster than many can grasp and adapt to.

Here is the original post:
Artificial Intelligence and Machine Learning in Cancer Detection - Targeted Oncology

ASCRS 2023: Predicting vision outcomes in cataract surgery with … – Optometry Times

Mark Packer, MD, sat down with Sheryl Stevenson, Group Editorial Director, Ophthalmology Times, to discuss his presentation on machine learning and predicting vision outcomes after cataract surgery at the 2023 ASCRS annual meeting in San Diego.

Editor's note: This transcript has been edited for clarity.

Sheryl Stevenson:

We're joined by Dr. Mark Packer, who will be presenting at this year's ASCRS. Hello, Dr. Packer. Great to see you again.

Mark Packer, MD:

Good to see you, Sheryl.

Stevenson:

Sure, tell us a little bit about your talk on machine learning and predicting vision outcomes after cataract surgery.

Packer:

Sure, well, as we know, humans tend to be fallible, and even though surgeons don't like to admit it, they have been prone to make errors from time to time. And you know, one of the errors that we make is that we always extrapolate from our most recent experience. So if I just had a patient who was very unhappy with a multifocal IOL, all of a sudden, I'm going to be a lot more cautious with my next patient, and maybe the one after that, too.

And, the reverse can happen as well. If I just had a patient who was absolutely thrilled with their toric multifocal, and they never have to wear glasses again, and they're leaving for Hawaii in the morning, you know, getting a full makeover, I'm going to think, wow, that was the best thing I ever did. And now all of a sudden, everyone looks like a candidate. And even for someone like me, who has been doing multifocal IOLs for longer than I care to admit, you know, this can still pose a problem. That's just human nature.

And, so what we're attempting to do with the oculotics program is to bring a little objectivity into the mix. Now, of course, we already do that when we talk about IOL power calculations; we leave that up to algorithms and let them do the work. One of the things that we've been able to do with oculotics is actually improve upon the way that power calculations are done. So rather than just looking at the dioptric power of a lens, for example, we're actually looking at the real optical properties of the lens, the modulation transfer function, in order to help correlate that with what a patient desires in terms of spectacle independence.

But the real brainchild here is the idea of incorporating patient feedback after surgery into the decision-making process. So part of this is actually to give our patients an app that they can use to provide feedback on their level of satisfaction, essentially, by filling out the VFQ-25, which is simply a 25-item questionnaire that was developed in the 1990s by RAND Corporation to look at visual function and how satisfied people are with their vision, whether they have to worry about it, how they feel about their vision, whether they can drive at night comfortably, and all that.

So if we can incorporate that feedback into our decision making, now instead of going into the next room with just what happened today fresh in my mind, I'll actually be incorporating the knowledge of every patient that I've operated on since I started using this system, and how they fared with these different IOLs.

So the machine learning algorithm can actually take this patient feedback and put that together with the preoperative characteristics such as, you know, personal items, such as hobbies, what they do for recreation, what their employment is, what kind of visual demands they have. And also anatomic factors, you know, the axial length, anterior chamber depth, corneal curvature, all of that, put that all together, and then we can begin to match intraocular lens selection actually to patients based not only on their biometry, but also on their personal characteristics, and how they actually felt about the results of their surgery.

So that's how I think machine learning can help us, and hopefully bring surgeons up to speed with premium IOLs more quickly because, you know, it's taken some of us years and years to gain the experience to really become confident in selecting which patients are right for premium lenses, particularly multifocal and extended depth of focus lenses and that sort of thing where, you know, there are visual side effects, and there are limitations, but there also are great advantages. And so hopefully using machine learning can bring young surgeons up to speed more quickly, increase their confidence, and allow them to increase the rate of adoption among their patients for these premium lenses.
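To make the approach Dr. Packer describes concrete, here is a purely hypothetical sketch, not the oculotics implementation: invented past cases combine biometry, a lifestyle factor, the implanted lens, and a post-operative VFQ-25-style satisfaction score, and a model trained on them scores candidate lenses for a new patient. Every field name and value is an assumption for illustration only.

```python
# A hypothetical sketch (NOT the oculotics system): learn, from invented past
# cases, which lens class tended to leave similar patients most satisfied
# (VFQ-25-style score), then suggest a lens for a new patient.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Invented historical cases: biometry + lifestyle + implanted lens + satisfaction
past = pd.DataFrame({
    "axial_length_mm": [23.1, 24.5, 22.8, 25.0, 23.6],
    "acd_mm":          [3.1, 3.4, 2.9, 3.6, 3.2],
    "night_driving":   [1, 0, 1, 0, 1],           # 1 = frequent night driving
    "lens":            ["multifocal", "monofocal", "monofocal", "edof", "multifocal"],
    "vfq25_score":     [78, 90, 85, 92, 70],
})

X = pd.get_dummies(past.drop(columns="vfq25_score"))
model = RandomForestRegressor(random_state=0).fit(X, past["vfq25_score"])

# For a new patient, score each candidate lens and report the predictions
new_patient = {"axial_length_mm": 23.9, "acd_mm": 3.3, "night_driving": 1}
candidates = ["monofocal", "multifocal", "edof"]
rows = pd.get_dummies(pd.DataFrame([{**new_patient, "lens": c} for c in candidates]))
rows = rows.reindex(columns=X.columns, fill_value=0)
print(dict(zip(candidates, model.predict(rows).round(1))))
```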

The rest is here:
ASCRS 2023: Predicting vision outcomes in cataract surgery with ... - Optometry Times

How AI and Machine Learning is Transforming the Online Gaming … – Play3r

Are you an avid online gamer? Do you find yourself craving a more immersive experience every time you jump into your favorite slot games, or any game for that matter? If so, you may be interested to learn how advances in AI and machine learning are transforming the gaming experience.

In this blog post, we will explore the ways that artificial intelligence and machine learning technologies are making online gaming smoother and more thrilling than ever before. We'll look at how these technologies have been used to enhance graphics, user interfaces, and in-game dynamics, all of which can drastically improve your gameplay.

Whether your favorite pastime is first-person shooters or real-time strategy games, let's delve into everything AI has to offer gamers!

As the online gaming industry continues to grow and evolve, AI and machine learning have become increasingly important tools for developers. These technologies can change the way we experience our favorite games, from providing more realistic and unpredictable opponents to personalized gameplay.

Through the use of AI and machine learning, game developers can analyze vast amounts of data, allowing them to create better-balanced and more engaging gaming experiences.

Additionally, these tools can help identify and prevent cheating, making online gaming fairer and more enjoyable for all. As the gaming industry moves forward, it's clear that AI and machine learning will play an important role in shaping the future of the industry.

The world of online gaming is constantly evolving and with the introduction of AI and machine learning, it just keeps getting better. These technologies have revolutionized the gaming industry and brought about countless benefits for both players and developers.

AI algorithms help create more realistic gameplay and sophisticated opponents, while machine learning helps predict player behavior and preferences, leading to a more personalized gaming experience.

Additionally, AI can help game developers optimize their games for performance and eliminate bugs faster than ever before. In short, the benefits of using AI and machine learning in online gaming are diverse and far-reaching, making it an exciting area to watch for future developments.

Developing AI and machine learning technologies can be incredibly challenging for software developers. One of the biggest obstacles faced by developers is finding the right data to train their algorithms effectively.

In addition to this, there is also a lot of complexity involved in designing AI systems that can learn from data with minimal human intervention. Moreover, creating machine learning models that can accurately predict and analyze data in real time requires a sophisticated understanding of various statistical techniques and programming languages.

With these challenges in mind, it's no wonder that many developers in this field feel overwhelmed. However, with the right tools and resources, developers can overcome these obstacles and continue advancing the exciting field of AI and machine learning.

The world of gaming has evolved significantly in recent years, and one major factor in this transformation is the integration of AI and machine learning into popular online games. From first-person shooters to strategy and adventure games, players have been enjoying a more immersive experience thanks to the inclusion of smarter, more complex non-player characters (NPCs) and advanced game optimization.

For example, in the game AI Dungeon, players can enter any storyline, and the AI generates a unique adventure based on their input. Similarly, the popular game League of Legends uses machine learning to optimize matchmaking, ensuring players are pitted against opponents of similar skill levels.
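League of Legends' actual matchmaking system is proprietary, so the snippet below is only a generic, Elo-style sketch of the underlying idea mentioned above: maintain a per-player skill rating, update it after each match, and pair players whose ratings are closest.

```python
# A generic Elo-style sketch of skill-based matchmaking (an illustration, not
# any specific game's system): track ratings, update after a match, and pair
# players with the closest ratings.
def expected_win(rating_a, rating_b):
    # Probability that player A beats player B under the Elo model
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating, expected, actual, k=32):
    # Shift the rating toward the observed result (1 = win, 0 = loss)
    return rating + k * (actual - expected)

ratings = {"alice": 1500, "bob": 1480, "cara": 1900}

# alice beats bob: both ratings shift toward the observed result
e = expected_win(ratings["alice"], ratings["bob"])
ratings["alice"] = update(ratings["alice"], e, 1)
ratings["bob"] = update(ratings["bob"], 1 - e, 0)

def find_match(player, pool):
    # Matchmaking: pick the opponent whose rating is closest to the player's
    return min((p for p in pool if p != player),
               key=lambda p: abs(ratings[p] - ratings[player]))

print(ratings)
print("best opponent for alice:", find_match("alice", ratings))
```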

With AI and machine learning continually improving, the future of online gaming promises to be even more exciting and engrossing.

Artificial intelligence and machine learning have drastically transformed the gaming industry in recent years. These technologies can analyze vast amounts of data, predict outcomes, and make recommendations for players to improve their overall gameplay experience. AI can also assist developers in creating more immersive worlds, where virtual characters have reactive behaviors that mimic real-life behaviors.

Machine learning algorithms, on the other hand, can help determine a player's skill level and preferences, adapting gameplay accordingly. Many gamers have already seen the benefits of these technologies, with smarter NPCs, more adaptive environments, and improved matchmaking systems.

As AI and machine learning continue to evolve, the gaming experience will only become more enhanced and personalized, creating an even more immersive world for players to explore.

AI and machine learning-based games have become increasingly popular in recent years, offering players a unique and immersive gaming experience. But how can you make the most of these cutting-edge titles?

Firstly, take the time to understand the game mechanics and the AI's decision-making process. This can help you anticipate actions and develop strategies to stay ahead of the curve. Additionally, be sure to give feedback to the developers, as this can help them improve the game's machine-learning algorithms and provide a better experience for everyone.

Lastly, don't be afraid to experiment and try out different approaches to see what works best. With these tips, you'll be well on your way to dominating the world of AI and machine learning-based gaming.

Online gaming experiences have been revolutionized by AI and machine learning technology, which give developers the ability to offer players intelligent, personalized gaming experiences that feel unique and engaging. Not only is this creating games that boost user retention, but it is also opening up exciting possibilities for multiplayer gaming.

Additionally, developers are increasingly leaning on AI and ML to create more immersive worlds for gamers to explore. Despite challenges in implementation, advances in AI and machine learning are offering a wide range of captivating new experiences for online gamers, from improved graphics to obstacles that learn in real time, making these technologies an important component in crafting better gameplay experiences than ever before.

As players continue to enjoy the ever-evolving world of online gaming, they should keep up with the latest trends in AI and machine learning to make sure they are getting the most out of their experience.


Go here to see the original:
How AI and Machine Learning is Transforming the Online Gaming ... - Play3r