Archive for the ‘Machine Learning’ Category

How Machine Learning Plays a Key Role in Diagnosing Type 2 … – Diabetes In Control

Type 2 diabetes is a chronic disease that affects millions of people around the world, leading to long-term health complications such as heart disease, nerve damage, and kidney failure. The early diagnosis of type 2 diabetes is critical in order to prevent these complications, and machine learning is helping to revolutionize the way this disease is diagnosed.

Machine learning algorithms use patterns in data to make predictions and decisions, and this same capability can be applied to the analysis of medical data in order to improve the diagnosis of type 2 diabetes. One of the key ways that machine learning is improving diabetes diagnosis is through the use of predictive algorithms. These algorithms can use data from patient histories, such as age, BMI, blood pressure, and blood glucose levels, to predict the likelihood of a patient developing type 2 diabetes. This can help healthcare providers to identify patients who are at high risk of developing the disease and take early action to prevent it.
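To make this concrete, here is a minimal sketch of such a risk model in Python with scikit-learn. Everything in it (the feature set, the records, the labels) is a hypothetical placeholder rather than clinical data; a real diagnostic model would be trained and validated on large patient datasets.

```python
# Hypothetical sketch of a diabetes risk predictor; the records below are
# invented placeholders, not real patient data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [age, BMI, systolic blood pressure, fasting glucose (mg/dL)]
X = np.array([
    [45, 31.2, 140, 118],
    [52, 27.8, 130, 102],
    [38, 24.5, 118, 90],
    [60, 33.0, 150, 126],
    [29, 22.1, 110, 85],
    [57, 35.4, 145, 130],
])
y = np.array([1, 0, 0, 1, 0, 1])  # 1 = later diagnosed with type 2 diabetes

model = LogisticRegression().fit(X, y)

# Estimate risk for a new (hypothetical) patient.
new_patient = np.array([[50, 29.4, 135, 110]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated risk of developing type 2 diabetes: {risk:.0%}")
```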

Another way that machine learning is improving diabetes diagnosis is through the use of advanced imaging techniques. Machine learning algorithms can be used to analyze images of the retina and identify early signs of diabetic retinopathy, a condition that often develops in people with type 2 diabetes and can cause vision loss. In addition, researchers are exploring whether machine learning analysis of pancreatic imaging can surface early changes associated with insulin resistance, a hallmark of type 2 diabetes.
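As a rough illustration of how an image-based screening model of this kind might be assembled, here is a transfer-learning sketch in TensorFlow/Keras. The dataset directory and class layout are hypothetical placeholders, and production retinopathy screeners are far more rigorously trained and validated.

```python
# Hypothetical sketch: fine-tune a pretrained CNN to flag signs of
# diabetic retinopathy in retinal photographs. "retina_images/" is a
# placeholder folder with subdirectories "healthy/" and "retinopathy/".
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "retina_images", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse ImageNet features, train only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(retinopathy)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```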

Machine learning can also mine large datasets from electronic health records to identify patterns and markers associated with type 2 diabetes. For example, algorithms can analyze patients' medical histories to surface risk factors such as family history, age, and lifestyle habits that increase the likelihood of developing the disease. Analyzing records at this scale helps healthcare providers flag high-risk patients earlier than manual chart review would allow.

One of the key benefits of machine learning in diabetes diagnosis is the ability to quickly and accurately analyze large amounts of data. Machine learning algorithms can process data far faster than humans and, for well-defined screening tasks, with comparable or better consistency, helping healthcare providers make more informed decisions about patient care. Additionally, algorithms can be trained to recognize patterns and markers that are specific to type 2 diabetes, improving the accuracy of diagnoses and reducing false positives.

In conclusion, machine learning is playing a critical role in the diagnosis of type 2 diabetes. With its ability to analyze large datasets, identify patterns and markers associated with the disease, and predict the likelihood of a patient developing type 2 diabetes, machine learning is helping to revolutionize the way this disease is diagnosed. By improving the accuracy and speed of diagnoses, machine learning helps ensure that patients receive the care they need as early as possible, preventing the long-term health complications associated with this disease.


*This article was produced with the assistance of artificial intelligence. Please always check and confirm with your own sources, and always consult with your healthcare professional when seeking medical treatment.


10 TensorFlow Courses to Get Started with AI & Machine Learning – Fordham Ram

Looking for ways to improve your TensorFlow machine learning skills?

As TensorFlow gains popularity, it has become imperative for aspiring data scientists and machine learning engineers to learn this open-source software library for dataflow and differentiable programming. However, finding the right TensorFlow course for your needs and budget can be tricky.

In this article, we have rounded up the top 10 online free and paid TensorFlow courses that will help you master this powerful machine learning framework.

Let's dive into TensorFlow and see which of our top 10 picks will help you take your machine-learning skills to the next level.

This course from Udacity is available free of cost. The course has 4 modules, each teaching you how to use models from TF Lite in different applications. This course will teach you everything you need to know to use TF Lite for Internet of Things devices, Raspberry Pi, and more.

The course starts with an overview of TensorFlow Lite before moving into the application-focused modules.

This course is ideal for people proficient in Python and familiar with iOS, Swift, or Linux.

Duration: 2 months

Price: Free

Certificate of Completion: No

With over 91,534 enrolled students and thousands of positive reviews, this Udemy course is one of the best-selling TensorFlow courses. It was created by José Portilla, who is famous for his record-breaking Udemy course, The Complete Python 3 Bootcamp, with over 1.5 million students enrolled.

As you progress through this course, you will learn to use TensorFlow for various tasks, including image classification with convolutional neural networks (CNNs). You'll also learn how to design your own neural network from scratch and analyze time series.

Overall, this course is excellent for learning TensorFlow fundamentals using Python. It covers the basics of TensorFlow and more, and does not require any prior knowledge of machine learning.

Duration: 14 hrs

Price: Paid

Certificate of Completion: Yes

Intro to TensorFlow for Deep Learning is the third entry on our list of free TensorFlow courses that are definitely worth checking out. The course includes a total of 10 modules. In the first part of the course, Dr. Sebastian Thrun, co-founder of Udacity, gives an interview about machine learning and Udacity.

Initially, you'll learn about the Fashion MNIST dataset. Then, as you progress through the course, you'll learn how to build a deep neural network (DNN) that categorizes images from that dataset.
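For a sense of what that looks like in code, here is a minimal tf.keras sketch of a Fashion MNIST classifier along the lines the course describes (the architecture the course actually uses may differ):

```python
# Minimal Fashion MNIST classifier sketch in tf.keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = \
    tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image to 784 vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 clothing classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f"Test accuracy: {test_acc:.3f}")
```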

The course covers other vital subjects, including transfer learning and forecasting time series.

This course is ideal for students who are fluent in Python and have some knowledge of linear algebra.

Duration: 2 months

Price: Free

Certificate of Completion: No

This course from Coursera is an excellent way to learn about the basics of TensorFlow. In this program, youll learn how to design and train neural networks and explore fascinating new AI and machine learning areas.

As you train a network to recognize real-world images, you'll also learn how convolutions can improve a network's efficiency. Additionally, you'll train a neural network to process human language using NLP techniques.

Even though auditing the courses is free, certification will cost you. However, if you complete the course within 7 days of enrolling, you can claim a full refund and still receive the certificate.

This course is for those who already have some experience.

Duration: 2 months

Price: Free

Certificate of Completion: Yes

This is a free Coursera course introducing TensorFlow for AI. To get started, click Enroll for Free and sign up. You'll then be prompted to select your preferred subscription period in a new window.

Look for the button labeled "Audit the Course"; clicking it gives you access to the course for free.

In the first week of this course, the instructor, Andrew Ng, provides a brief overview, followed by a discussion of what the course is all about.

The Fashion MNIST dataset is introduced in the second week as a context for the fundamentals of computer vision. The purpose of this section is for you to put your knowledge into practice by writing your own computer vision neural network code.

Those with some Python experience will benefit the most from this course.

Duration: 4 months

Price: Free

Certificate of Completion: Yes

For those seeking TensorFlow Developer Certification in 2023, TensorFlow Developer Certificate in 2023: Zero to Mastery is an excellent choice since it is comprehensive, in-depth, and top-quality.

In this online course, you'll learn everything you need to know to advance from knowing zero about TensorFlow to being a fully certified member of Google's TensorFlow Certification Network, all under the guidance of Daniel Bourke, a TensorFlow Accredited Professional.

The course involves completing exercises, carrying out experiments, and designing machine learning models and applications.

By enrolling in this 64-hour course, you will learn everything you need to know about designing cutting-edge deep learning solutions and passing the TensorFlow Developer certification exam.

This course is the right fit for anyone wanting to advance from TensorFlow novice to Google Certified Professional.

Duration: 64 hrs

Price: Paid

Certificate of Completion: Yes

This is yet another high-quality course that is free to audit, featuring a five-week study schedule.

This online course will teach you how to use TensorFlow to create deep learning models from start to finish. You'll learn through hands-on programming sessions led by an experienced instructor, where you can immediately put what you've learned into practice.

The third and fourth weeks focus on model validation, normalization, TensorFlow Hub modules, and more, and the final week is dedicated to a capstone project. Students in this course get a great deal of hands-on learning and work.

This course is ideal for those who are already familiar with Python and understand machine learning fundamentals.

Duration: 26 hrs

Price: Free

Certificate of Completion: No

This hands-on course introduces you to Google's cutting-edge deep learning framework, TensorFlow, and shows you how to use it.

This program is geared toward learners who are in a bit of a rush to get up to speed. However, it also provides in-depth segments for those interested in the theory behind things like loss functions and gradient descent methods.

This course will teach you how to build Python recommendation systems with TensorFlow. It was created by Lazy Programmer, one of the best machine learning instructors on Udemy.

Furthermore, you will create an app that predicts the stock market using Python. If you prefer hands-on learning through projects, this TensorFlow course is ideal for you.

This is a fantastic resource for those new to programming and just getting their feet wet in the fields of Data Science and Machine Learning.

Duration: 23.5 hrs

Price: Paid

Certificate of Completion: Yes

This resource is excellent for learning TensorFlow and machine learning on Google Cloud. The course offers an advanced TensorFlow environment for building robust, complex deep learning models.

People who are just getting started will find this course one of the most promising. It has five modules that will teach you a lot about TensorFlow and machine learning.

Duration: 4 months

Price: Free

Certificate of Completion: Paid certificate available

This course, developed by Hadelin de Ponteves, the Ligency I Team, and Luka Anicin, will introduce you to neural networks and TensorFlow in less than 13 hours. The course provides a more basic introduction to TensorFlow and Keras than its counterparts.

In this course, you'll begin with Python syntax fundamentals, then proceed to programming neural networks using TensorFlow, Google's machine learning framework.

A major advantage of this course is its use of Colab for labs and assignments. Colab leaves less room for mistakes, and you come away with an excellent, shareable online portfolio of your work.

This course is intended for programmers who are already comfortable working with Python.

Duration: 13 hrs

Price: Paid

Certificate of Completion: Yes

In conclusion, we've discussed 10 free and paid online TensorFlow courses that can help you learn and improve your skills in this powerful machine-learning framework. We've seen that there are options for both beginners and more advanced users, and that some courses offer hands-on projects and real-world applications.

If you're interested in taking your TensorFlow skills to the next level, we encourage you to explore some of the courses we've covered in this post. Whether you're looking for a free introduction or a more in-depth paid course, there's something for everyone.

So don't wait: enroll in one of these incredibly helpful courses today and start learning TensorFlow!

And as always, we'd love to hear your thoughts and experiences in the comments below. What other TensorFlow courses have you tried? Let us know!

Online TensorFlow courses can be suitable for beginners, but some prior knowledge of machine learning concepts helps. Choose a course that aligns with your skill level and offers clear explanations of the foundational concepts. Some courses assume prior knowledge of Python programming or linear algebra, so it's important to research the course requirements before enrolling.

The duration of a typical TensorFlow course can vary widely, ranging from a few weeks to several months, depending on the level of depth and complexity. How much time you should dedicate each week will depend on the course and your schedule, but most courses recommend several hours of study per week to make meaningful progress.

Some best practices for learning TensorFlow online include setting clear learning objectives, taking comprehensive notes, practicing coding exercises regularly, seeking help from online forums or community groups, and working on real-world projects to apply your knowledge. To ensure you're progressing and mastering the concepts, track your progress, regularly test your understanding of the material, and seek feedback from peers or instructors.

Prerequisites for online TensorFlow courses vary, but basic programming skills and familiarity with Python are often required. A solid understanding of linear algebra and calculus helps with the underlying mathematical concepts. Some courses may also require hardware, such as a powerful graphics processing unit (GPU), for training large-scale deep learning models. It's important to carefully review the course requirements before enrolling.

Some online TensorFlow courses offer certifications upon completion, but there are no official degrees in TensorFlow. Earning a certification can demonstrate your knowledge and proficiency in the framework, which can help advance your career in machine learning or data science. However, it's important to supplement your knowledge with real-world projects and practical experience to be successful in the field.


An introduction to generative AI with Swami Sivasubramanian – All Things Distributed

In the last few months, we've seen an explosion of interest in generative AI and the underlying technologies that make it possible. It has pervaded the collective consciousness, spurring discussions from board rooms to parent-teacher meetings. Consumers are using it, and businesses are trying to figure out how to harness its potential. But it didn't come out of nowhere: machine learning research goes back decades. In fact, machine learning is something that we've done well at Amazon for a very long time. It's used for personalization on the Amazon retail site, it's used to control robotics in our fulfillment centers, and it's used by Alexa to improve intent recognition and speech synthesis. Machine learning is in Amazon's DNA.

To get to where we are, it's taken a few key advances. First was the cloud. This is the keystone that provided the massive amounts of compute and data that are necessary for deep learning. Next were neural nets that could understand and learn from patterns. This unlocked complex algorithms, like the ones used for image recognition. Finally, the introduction of transformers. Unlike RNNs, which process inputs sequentially, transformers can process multiple sequences in parallel, which drastically speeds up training times and allows for the creation of larger, more accurate models that can understand human knowledge and do things like write poems or even debug code.
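To illustrate the parallelism point, here is a toy NumPy sketch of scaled dot-product attention, the core transformer operation. Every position attends to every other position in a single matrix multiply, rather than stepping through the sequence token by token the way an RNN does; this is a simplified illustration, not production transformer code.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over a whole sequence at once."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # all token pairs in one step
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V

seq_len, d_model = 4, 8                 # a toy sequence of 4 tokens
x = np.random.rand(seq_len, d_model)    # stand-in token embeddings
out = attention(x, x, x)                # self-attention: Q = K = V = x
print(out.shape)                        # (4, 8): one output per token
```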

I recently sat down with an old friend of mine, Swami Sivasubramanian, who leads database, analytics, and machine learning services at AWS. He played a major role in building the original Dynamo and later bringing that NoSQL technology to the world through Amazon DynamoDB. During our conversation I learned a lot about the broad landscape of generative AI, what we're doing at Amazon to make large language and foundation models more accessible, and last, but not least, how custom silicon can help to bring down costs, speed up training, and increase energy efficiency.

We are still in the early days, but as Swami says, large language and foundation models are going to become a core part of every application in the coming years. I'm excited to see how builders use this technology to innovate and solve hard problems.

To think, it was more than 17 years ago, on his first day, that I gave Swami two simple tasks: 1/ help build a database that meets the scale and needs of Amazon; 2/ re-examine the data strategy for the company. He says it was an ambitious first meeting. But I think he's done a wonderful job.

If you'd like to read more about what Swami's teams have built, you can do so here. The entire transcript of our conversation is available below. Now, as always, go build!

This transcript has been lightly edited for flow and readability.

***

Werner Vogels: Swami, we go back a long time. Do you remember your first day at Amazon?

Swami Sivasubramanian: I still remember… it wasn't very common for PhD students to join Amazon at that time, because we were known as a retailer or an ecommerce site.

WV: We were building things and that's quite a departure for an academic. Definitely for a PhD student. To go from thinking to actually, how do I build?

So you brought DynamoDB to the world, and quite a few other databases since then. But now, under your purview there's also AI and machine learning. So tell me, what does your world of AI look like?

SS: After building a bunch of these databases and analytics services, I got fascinated by AI because, literally, AI and machine learning put data to work.

If you look at machine learning technology itself, broadly, it's not necessarily new. In fact, some of the first papers on deep learning were written like 30 years ago. But even in those papers, they explicitly called out that, for it to get large-scale adoption, it required a massive amount of compute and a massive amount of data to actually succeed. And that's what cloud got us: the ability to actually unlock the power of deep learning technologies. Which led me, about 6 or 7 years ago, to start the machine learning organization, because we wanted to take machine learning, especially deep learning style technologies, from the hands of scientists to everyday developers.

WV: If you think about the early days of Amazon (the retailer), with similarities and recommendations and things like that, were they the same algorithms that we're seeing used today? That's a long time ago, almost 20 years.

SS: Machine learning has really gone through huge growth in the complexity of the algorithms and the applicability of use cases. Early on the algorithms were a lot simpler, like linear algorithms or gradient boosting.

The last decade, it was all around deep learning, which was essentially a step up in the ability for neural nets to actually understand and learn from patterns, which is effectively where all the image-based or image processing algorithms come from. And then also, personalization with different kinds of neural nets and so forth. And that's what led to the invention of Alexa, which has remarkable accuracy compared to others. Neural nets and deep learning have really been a step up. And the next big step up is what is happening today in machine learning.

WV: So a lot of the talk these days is around generative AI, large language models, foundation models. Tell me, why is that different from, let's say, the more task-based models, like vision algorithms and things like that?

SS: If you take a step back and look at all these foundation models, large language models… these are big models, which are trained with hundreds of millions of parameters, if not billions. A parameter, just to give context, is like an internal variable that the ML algorithm must learn from its data set. Now to give a sense… what is this big thing that has suddenly happened?

A few things. One, transformers have been a big change. A transformer is a kind of neural net technology that is remarkably more scalable than previous versions like RNNs and various others. So what does this mean? Why did this suddenly lead to all this transformation? Because it is actually scalable and you can train them a lot faster, and now you can throw a lot of hardware and a lot of data [at them]. Now that means, I can actually crawl the entire world wide web and actually feed it into these kinds of algorithms and start building models that can actually understand human knowledge.

WV: So the task-based models that we had before, and that we were already really good at: could you build them based on these foundation models? Task-specific models, do we still need them?

SS: The way to think about it is that the need for task-specific models is not going away. But what's essentially changing is how we go about building them. You still need a model to translate from one language to another or to generate code and so forth. But how easily you can now build them is essentially a big change, because with foundation models, which are trained on the entire corpus of knowledge… that's a huge amount of data. Now, it is simply a matter of actually building on top of this and fine-tuning with specific examples.

Think about if you're running a recruiting firm, as an example, and you want to ingest all your resumes and store them in a format that is standard for you to search and index on. Instead of building a custom NLP model to do all that, now you can use foundation models with a few examples of "here is an input resume in this format" and "here is the output resume." You can even fine-tune these models by just giving a few specific examples. And then you essentially are good to go.

WV: So in the past, most of the work probably went into labeling the data. And that was also the hardest part, because that drives the accuracy.

SS: Exactly.

WV: So in this particular case, with these foundation models, labeling is no longer needed?

SS: Essentially. I mean, yes and no. As always with these things there is a nuance. But a majority of what makes these large scale models remarkable is that they can actually be trained on a lot of unlabeled data. You actually go through what I call a pre-training phase, which is essentially you collect data sets from, let's say, the world wide web, like common crawl data or code data and various other data sets, Wikipedia, whatnot. And then, you don't even label them, you kind of feed them as is. But you have to, of course, go through a sanitization step in terms of making sure you cleanse the data of PII and other material such as hate speech and whatnot. Then you actually start training on a large number of hardware clusters, because training these models can take tens of millions of dollars. Finally, you get a notion of a model, and then you go through the next step of what is called inference.

WV: Let's take object detection in video. That would be a smaller model than what we see now with the foundation models. What's the cost of running a model like that? Because now, these models with hundreds of billions of parameters are very large.

SS: Yeah, that's a great question, because there is so much talk already happening around training these models, but very little talk on the cost of running these models to make predictions, which is inference. It's a signal that very few people are actually deploying it at runtime for actual production. But once they actually deploy in production, they will realize, oh no, these models are very, very expensive to run. And that is where a few important techniques actually really come into play. So one, once you build these large models, to run them in production, you need to do a few things to make them affordable to run at scale, and run in an economical fashion. I'll hit some of them. One is what we call quantization. The other one is what I call distillation, which is that you have these large teacher models, and even though they are trained on hundreds of billions of parameters, they are distilled to a smaller, fine-grained model. And I'm speaking in super abstract terms, but that is the essence of these models.
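As a concrete, simplified example of the quantization idea Swami describes, here is a sketch of post-training dynamic-range quantization with TensorFlow Lite. The model path is a hypothetical placeholder, and this illustrates the general technique rather than how AWS optimizes its own models.

```python
import tensorflow as tf

# "my_model" is a placeholder path to any trained TensorFlow SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("my_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # 8-bit dynamic-range quantization
tflite_model = converter.convert()

# The quantized model is typically ~4x smaller and cheaper to run at inference.
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```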

WV: So we do build… we do have custom hardware to help out with this. Normally this is all GPU-based, and GPUs are expensive, energy-hungry beasts. Tell us what we can do with custom silicon that makes it so much cheaper, both in terms of cost as well as, let's say, your carbon footprint.

SS: When it comes to custom silicon, as mentioned, the cost is becoming a big issue in these foundation models, because they are very, very expensive to train and also very expensive to run at scale. You can actually build a playground and test your chatbot at low scale and it may not be that big a deal. But once you start deploying at scale as part of your core business operation, these things add up.

In AWS, we did invest in our custom silicon: in Trainium for training and in Inferentia for inference. And all these things are ways for us to actually understand the essence of which operators are making, or are involved in making, these prediction decisions, and to optimize them at the core silicon level and software stack level.

WV: If cost is also a reflection of energy used, because in essence that's what you're paying for, you can also see that custom silicon is, from a sustainability point of view, much better than running on general purpose GPUs.

WV: So there's a lot of public interest in this recently. And it feels like hype. Is this something where we can see that this is a real foundation for future application development?

SS: First of all, we are living in very exciting times with machine learning. I have probably said this every year now, but this year it is even more special, because these large language models and foundation models truly can enable so many use cases where people don't have to staff separate teams to go build task-specific models. The speed of ML model development will really actually increase. But you won't get to that end state that you want in the coming years unless we actually make these models more accessible to everybody. This is what we did with SageMaker early on with machine learning, and that's what we need to do with Bedrock and all its applications as well.

We do think that, while the hype cycle will subside like with any technology, these are going to become a core part of every application in the coming years. And they will be done in a grounded way, and in a responsible fashion too, because there is a lot more that people need to think through in a generative AI context: what kind of data did it learn from, what response does it generate, and how truthful is it as well? This is the stuff we are excited to actually help our customers [with].

WV: So when you say that this is the most exciting time in machine learning, what are you going to say next year?


Having one of these in-demand tech skills can help boost your pay by nearly $40,000: here's how – CNBC

The only thing standing between you and a pay bump of almost $40,000 could be a certificate in machine learning.

U.S. workers with advanced tech skills earn about 49% more than workers who don't use tech skills in their jobs, according to newly released research from Gallup and Amazon Web Services (AWS), which surveyed more than 3,000 U.S. workers and 1,170 U.S. employers in August 2022. This translates into average individual gains of $36,552 per year.

As the development and adoption of new technologies continue at a breakneck pace, the need for digitally savvy workers is "greater than ever," the report notes.

Newer technologies including cryptocurrency, the metaverse and artificial intelligence are becoming skills requirements for jobs in several industries, including finance, manufacturing and health care, with nearly two-thirds of employers saying it's "highly likely" these inventions will become a core part of their business in the near future.

Those who pursue digital upskilling stand to reap major benefits from this trend: at least four in 10 U.S. workers say learning new digital skills helped them boost their pay (43%), work more efficiently (42%), or get promoted (40%).

Here are the 10 tech skills employers say are "extremely likely" to become standard parts of doing business, and the most in-demand skills they are hiring for, according to AWS and Gallup:

At the top of the list is 5G, or the fifth generation of wireless technology, which cellphone companies began using in 2019. 5G technology can be used to make data transmission more efficient across industries: In health care, for example, large files can be transmitted more quickly between doctors and hospitals.

Generative AI tools, in particular, have become more popular in the workplace since the launch of ChatGPT in late 2022, says Jay Shankar, vice president of global talent acquisition at Amazon Web Services.

"It's a super important skillset employers are looking for, across all industries," she adds. "AI is practically everywhere now and to me, if there's one technical skill you want to learn, that's the area to focus on."

Many of the jobs hiring for these technical skills, such as machine learning engineer and full stack developer, offer competitive salaries of $100,000 per year or higher.

The rise of generative AI tools has elicited increased demand for prompt engineers, who test prompts and build user guides to improve chatbots' responses, Business Insider reports. Some of these jobs, which don't require an engineering or coding background, can pay as much as $335,000.

If you're looking to enhance your generative AI skills, there are several certification and training courses online, from the University of Michigan, Coursera and other e-learning platforms. For other technical skills, including machine learning and data analytics, AWS offers free online courses.

While some experts have warned that certain technologies, like AI and robotics, could replace millions of jobs in the next 10 years, Shankar says such innovations should be used to help workers do their jobs better, not take them over completely. "It's enabling us to accomplish things faster, and evolve many roles," she adds. "But I don't think AI, for example, will ever fully replace humans."

DON'T MISS: Want to be smarter and more successful with your money, work & life?Sign up for our new newsletter!

Check out:

ChatGPT is the hottest new job skill that can help you get hired, according to HR experts

The No. 1 mistake job seekers make, according to the CEO of ZipRecruiter, and it's entirely avoidable

10 in-demand remote jobs paying $100,000 or more that companies are hiring for now


MVTec further expands HALCON functionality with new deep … – Robotics Tomorrow

New version 23.05 extends HALCON's comprehensive software library
New Deep Counting feature for counting large quantities
Release on May 23, 2023

Munich, April 13, 2023 - MVTec Software GmbH (www.mvtec.com), a leading international software manufacturer for machine vision worldwide, will launch version 23.05 of the standard machine vision software HALCON on May 23, 2023. The focus of the new release is deep learning methods. The main feature here is Deep Counting, a deep-learning-based method that can robustly count large quantities of objects. In addition, improvements for the training of the deep learning technologies 3D Gripping Point Detection as well as Deep OCR have been integrated into the new HALCON version. With HALCON 23.05, it is now possible to further optimize the underlying deep learning networks, which are already pre-trained on industry-related images, for the user's own application. This allows even more robust recognition rates for Deep OCR applications as well as an even more reliable detection of suitable gripping surfaces for applications using 3D Gripping Point Detection technology. In addition, there are many other helpful improvements, such as the fact that external code can now be integrated into HALCON more easily.

Training for Deep OCR
Deep OCR reads text in a very robust way, regardless of its orientation and font. For this purpose, the technology first detects the relevant text within the image and then reads it. With HALCON 23.05, it's now also possible to fine-tune the text detection by retraining the pretrained network with application-specific images. This provides even more robust results and opens new application possibilities, for example: the detection of text with arbitrary printing types or previously unseen character types, as well as improved readability in noisy, low-contrast environments.

Training for 3D Gripping Point Detection
3D Gripping Point Detection can be used to robustly detect surfaces on any object that are suitable for gripping with suction. In HALCON 23.05, there is now the possibility to retrain the pretrained model with your own application-specific image data. Grippable surfaces are thus recognized even more robustly. The necessary labeling is done easily and efficiently via the MVTec Deep Learning Tool.

Easy Extensions Interface
With the help of HALCON extension packages, the integration of external programming languages is possible. The advantage for customers: functionality that goes beyond pure image processing can thus be covered by HALCON. In HALCON 23.05, the integration of external code has become much easier with the Easy Extensions Interface. This allows users to make their own functions written in .NET code usable in HDevelop and HDevEngine in just a few steps, while benefiting from the wide range of functionality offered by the .NET framework. Even the data types and HALCON operators known from the HALCON/.NET language interface can be used. This increases both the flexibility and the application possibilities of HALCON.

About MVTec Software GmbH
MVTec is a leading manufacturer of standard software for machine vision. MVTec products are used in all demanding areas of imaging: semiconductor industry, surface inspection, automatic optical inspection systems, quality control, metrology, as well as medicine and surveillance. By providing modern technologies such as 3D vision, deep learning, and embedded vision, software by MVTec also enables new automation solutions for the Industrial Internet of Things aka Industry 4.0. With locations in Germany, the USA, and China, as well as an established network of international distributors, MVTec is represented in more than 35 countries worldwide. http://www.mvtec.com

About MVTec HALCON
MVTec HALCON is the comprehensive standard software for machine vision with an integrated development environment (HDevelop) that is used worldwide. It enables cost savings and improved time to market. HALCON's flexible architecture facilitates rapid development of any kind of machine vision application. MVTec HALCON provides outstanding performance and a comprehensive support of multi-core platforms, special instruction sets like AVX2 and NEON, as well as GPU acceleration. It serves all industries, with a library used in hundreds of thousands of installations in all areas of imaging like blob analysis, morphology, matching, measuring, and identification. The software provides the latest state-of-the-art machine vision technologies, such as comprehensive 3D vision and deep learning algorithms. The software secures your investment by supporting a wide range of operating systems and providing interfaces to hundreds of industrial cameras and frame grabbers, in particular by supporting standards like GenICam, GigE Vision, and USB3 Vision. By default, MVTec HALCON runs on Arm-based embedded vision platforms. It can also be ported to various target platforms. Thus, the software is ideally suited for the use within embedded and customized systems. http://www.halcon.com, http://www.embedded-vision-software.com
