Archive for the ‘Machine Learning’ Category

Research Associate / Postdoc – Machine Learning for Computer Vision job with TECHNISCHE UNIVERSITAT DRESDEN (TU DRESDEN) | 210323 – Times Higher…

At TU Dresden, Faculty of Computer Science, Institute of Artificial Intelligence, the Chair of Machine Learning for Computer Vision offers a position as

Research Associate / Postdoc

Machine Learning for Computer Vision

(subject to personal qualification employees are remunerated according to salary group E 14 TV-L)

starting at the next possible date. The position is limited to three years, with the option of an extension. The period of employment is governed by the Fixed Term Research Contracts Act (Wissenschaftszeitvertragsgesetz - WissZeitVG). The position aims at obtaining further academic qualification. Balancing family and career is an important issue. The position is generally suitable for candidates seeking part-time employment. Please note this in your application.

Tasks:

Requirements:

Applications from women are particularly welcome. The same applies to people with disabilities.

Please submit your comprehensive application including the usual documents (CV, degree certificates, transcript of records, etc.) by 31.07.2020 (stamped arrival date of the university central mail service applies) preferably via the TU Dresden SecureMail Portal https://securemail.tu-dresden.de/ by sending it as a single PDF document to mlcv@tu-dresden.de or to: TU Dresden, Fakultät Informatik, Institut für Künstliche Intelligenz, Professur für Maschinelles Lernen für Computer Vision, Herrn Prof. Dr. rer. nat. Björn Andres, Helmholtzstr. 10, 01069 Dresden. Please submit copies only, as your application will not be returned to you. Expenses incurred in attending interviews cannot be reimbursed.

Reference to data protection: Your data protection rights, the purpose for which your data will be processed, as well as further information about data protection are available to you on the website: https://tu-dresden.de/karriere/datenschutzhinweis

Please find the German version at: https://tu-dresden.de/stellenausschreibung/7713.


Online learning is in and Coursera has been doing it for years with affordable college degrees – SILive.com

During the coronavirus pandemic, online learning came to the forefront for students of all ages.

While grammar schools, intermediate schools and high schools won't move to the web platform, colleges across the country realize they have to offer remote classes to stay competitive.

One online learning platform, Coursera.org, has been specializing in online education for years.

Coursera is a worldwide online learning platform founded in 2012 by Stanford professors Andrew Ng and Daphne Koller that offers massive open online courses, specializations and degrees.

It's not just another of the world's run-of-the-mill online colleges. You will specialize in a particular field of your choice, and once you graduate you will be ready to take on the world.

Right now, you can sign up for free and see what Coursera has to offer.

Coursera works with universities and other organizations to offer online courses, specializations, and degrees in a variety of subjects, such as engineering, data science, machine learning, mathematics, business, computer science, digital marketing, humanities, medicine, biology, social sciences and others.

And you can get a certified degree in a lot less time than it takes at traditional colleges and universities.

According to Wikipedia, courses last approximately 4 to 10 weeks, with one to two hours of video lectures a week.

These courses provide quizzes, weekly exercises, peer-graded assignments, and sometimes a final project or exam. Courses are also provided on-demand, in which case users can take their time in completing the course with all of the material available at once. As of May 2015, Coursera offered 104 on-demand courses.

As of 2017, Coursera offered full master's degrees.

The cost, you might ask? Well, you won't have to break the bank compared to some traditional colleges and universities.

Coursera offers some free courses, but individual courses, which last 4 to 6 weeks, range in price from $29 to $99.

Specialized programs, which can last 4-6 months, are $39-$79 per month.

An online degree, which can take 1-3 years, can range from $15,000 to $25,000, a steep discount from what private colleges and universities charge.

Click here to register now for free and explore all Coursera has to offer.


The key differences between rule-based AI and machine learning – The Next Web

Companies across industries are exploring and implementing artificial intelligence (AI) projects, from big data to robotics, to automate business processes, improve customer experience, and innovate product development. According to McKinsey, embracing AI promises considerable benefits for businesses and economies through its contributions to productivity and growth. But with that promise come challenges.

Computers and machines don't come into this world with inherent knowledge or an understanding of how things work. Like humans, they need to be taught that a red light means stop and green means go. So, how do these machines actually gain the intelligence they need to carry out tasks like driving a car or diagnosing a disease?

There are multiple ways to achieve AI, and essential to them all is data. Without quality data, artificial intelligence is a pipe dream. There are two ways data can be manipulated to achieve AI, through rules or through machine learning, and there are some best practices to help you choose between the two methods.

Long before AI and machine learning (ML) became mainstream terms outside of the high-tech field, developers were encoding human knowledge into computer systems as rules that get stored in a knowledge base. These rules define all aspects of a task, typically in the form of "if" statements (if A, then do B; else if X, then do Y).

While the number of rules that have to be written depends on the number of actions you want a system to handle (for example, 20 actions means manually writing and coding at least 20 rules), rules-based systems are generally lower effort, more cost-effective and less risky, since these rules won't change or update on their own. However, rules can limit AI capabilities with rigid intelligence that can only do what they've been written to do.
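To make the contrast concrete, here is a minimal sketch of a rules-based system. The signals, actions, and the `decide` function are invented for illustration, not taken from any real product: every behavior is a hand-written if/then rule, and an input with no matching rule simply goes unhandled.

```python
# Minimal sketch of a rules-based system (hypothetical signals/actions):
# each behavior is a hand-written if/then rule in a small knowledge base.

RULES = [
    ("red", "stop"),
    ("yellow", "slow down"),
    ("green", "go"),
]

def decide(signal: str) -> str:
    # Fixed intelligence: the system can only handle inputs
    # a human has explicitly written a rule for.
    for condition, action in RULES:
        if signal == condition:
            return action
    return "no matching rule"

print(decide("red"))    # stop
print(decide("blue"))   # no matching rule
```

Adding a 21st behavior means a human writing a 21st rule; the system never updates itself.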

While a rules-based system could be considered as having fixed intelligence, a machine learning system, in contrast, is adaptive and attempts to simulate human intelligence. There is still a layer of underlying rules, but instead of a human writing a fixed set, the machine has the ability to learn new rules on its own, and discard ones that aren't working anymore.

In practice, there are several ways a machine can learn, but supervised training, in which the machine is given labeled data to train on, is generally the first step in a machine learning program. Eventually, the machine will be able to interpret, categorize, and perform other tasks with unlabeled data or unknown information on its own.
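As a toy illustration of supervised training (the feature vectors and labels below are invented for the example), the machine is handed labeled examples and then classifies unseen points on its own, here with a simple nearest-neighbour rule standing in for a full learning algorithm:

```python
# Toy supervised training (invented data): labeled examples in,
# predictions on unseen points out, via a 1-nearest-neighbour rule.

from math import dist

# Labeled training set: (feature vector, label)
train = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

def predict(point):
    # Classify an unseen point by its closest labeled example.
    _, label = min(train, key=lambda ex: dist(ex[0], point))
    return label

print(predict((1.1, 0.9)))  # cat
print(predict((5.1, 4.9)))  # dog
```

No human wrote a rule mapping (1.1, 0.9) to "cat"; the decision boundary comes entirely from the labeled data.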

The anticipated benefits of AI are high, so the decisions a company makes early in its execution can be critical to success. Foundational is aligning your technology choices to the underlying business goals that AI was set forth to achieve. What problems are you trying to solve, or what challenges are you trying to meet?

The decision to implement a rules-based or machine learning system will have a long-term impact on how a company's AI program evolves and scales. Here are some best practices to consider when evaluating which approach is right for your organization:

When choosing a rules-based approach makes sense:

The promises of AI are real, but for many organizations, the challenge is where to begin. If you fall into this category, start by determining whether a rules-based or ML method will work best for your organization.

This article was originally published by Elana Krasner on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.

Published June 13, 2020 13:00 UTC


Using Machine Learning to Accurately Predict Rock Thermal Conductivity for Enhanced Oil Production – SciTechDaily

Skoltech scientists and their industry colleagues have found a way to use machine learning to accurately predict rock thermal conductivity. Credit: Pavel Odinev / Skoltech

Skoltech scientists and their industry colleagues have found a way to use machine learning to accurately predict rock thermal conductivity, a crucial parameter for enhanced oil recovery. The research, supported by Lukoil-Engineering LLC, was published in the Geophysical Journal International.

Rock thermal conductivity, or its ability to conduct heat, is key to both modeling a petroleum basin and designing enhanced oil recovery (EOR) methods, the so-called tertiary recovery that allows an oil field operator to extract significantly more crude oil than using basic methods. A common EOR method is thermal injection, where oil in the formation is heated by various means such as steam, and this method requires extensive knowledge of heat transfer processes within a reservoir.

For this, one would need to measure rock thermal conductivity directly in situ, but this has turned out to be a daunting task that has not yet produced satisfactory results usable in practice. So scientists and practitioners turned to indirect methods, which infer rock thermal conductivity from well-logging data that provides a high-resolution picture of vertical variations in rock physical properties.

"Today, three core problems rule out any chance of measuring thermal conductivity directly within non-coring intervals. It is, firstly, the time required for measurements: petroleum engineers cannot let you put the well on hold for a long time, as it is economically unreasonable. Secondly, induced convection of drilling fluid drastically affects the results of measurements. And finally, there is the unstable shape of boreholes, which has to do with some technical aspects of measurements," Skoltech Ph.D. student and the paper's first author Yury Meshalkin says.

Known well-log based methods can use regression equations or theoretical modeling, and both have drawbacks having to do with data availability and nonlinearity in rock properties. Meshalkin and his colleagues pitted seven machine learning algorithms against each other in a race to reconstruct thermal conductivity from well-logging data as accurately as possible. They also chose the Lichtenecker-Asaad theoretical model as a benchmark for this comparison.

Using real well-log data from a heavy oil field located in the Timan-Pechora Basin in northern Russia, the researchers found that, among the seven machine-learning algorithms and basic multiple linear regression, Random Forest provided the most accurate well-log based predictions of rock thermal conductivity, even beating the theoretical model.
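The study's actual well-log data is not reproduced here, so the sketch below uses synthetic stand-ins (hypothetical porosity, density and gamma-ray curves, and an invented conductivity formula) purely to show the shape of the approach: fit a Random Forest regressor on log features and score it on held-out depth samples.

```python
# Sketch under stated assumptions: synthetic porosity/density/gamma-ray
# curves stand in for real log features, and a made-up nonlinear target
# stands in for measured thermal conductivity.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
porosity = rng.uniform(0.05, 0.35, n)   # fraction
density = rng.uniform(2.0, 2.8, n)      # g/cm^3
gamma = rng.uniform(20.0, 150.0, n)     # API units
X = np.column_stack([porosity, density, gamma])
# Synthetic conductivity: nonlinear in the logs, plus measurement noise
y = 3.0 * density - 4.0 * porosity + 0.002 * gamma ** 1.2 \
    + rng.normal(0.0, 0.05, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], y[:400])              # train on one depth interval
score = model.score(X[400:], y[400:])    # R^2 on held-out samples
print(f"held-out R^2: {score:.2f}")
```

The appeal noted in the paper carries over even to this toy version: the model needs nothing beyond common well-log curves, with no extra physical parameters.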

"If we look at today's practical needs and existing solutions, I would say that our best machine learning-based result is very accurate. It is difficult to give some qualitative assessment as the situation can vary and is constrained to certain oil fields. But I believe that oil producers can use such indirect predictions of rock thermal conductivity in their EOR design," Meshalkin notes.

Scientists believe that machine-learning algorithms are a promising framework for fast and effective predictions of rock thermal conductivity. These methods "are more straightforward and robust and require no extra parameters outside common well-log data. Thus, they can radically enhance the results of geothermal investigations, basin and petroleum system modelling and optimization of thermal EOR methods," the paper concludes.

Reference: "Robust well-log based determination of rock thermal conductivity through machine learning" by Yury Meshalkin, Anuar Shakirov, Evgeniy Popov, Dmitry Koroteev and Irina Gurbatova, 5 May 2020, Geophysical Journal International. DOI: 10.1093/gji/ggaa209


Tamr: Machine Learning Can Be Used to Transform Creative Talent Management – Media & Entertainment Services Alliance M&E Daily Newsletter

Machine learning can be used by the best talent managers today to transform creative talent management and find the right opportunities for their clients, according to Matt Holzapfel, solutions lead at enterprise data unification and data mastering specialist Tamr.

In an industry that runs on storytelling, its stories are increasingly informed by huge amounts of data: hundreds of datasets, millions of records and billions of data points (including tweets) from sources inside and outside the business. By using machine learning to serve up analytics-ready data from disparate data, creative talent management firms can create very human stories with mutually successful outcomes for clients and media companies time and time again.

Tamr helps large organizations "clean up dirty data so that they can get that data ready for their analytic and digital transformation aspirations," Holzapfel said during a May 27 presentation at the Hollywood Innovation and Transformation Summit (HITS) Live event.

During the presentation "Using Machine Learning to Transform Creative Talent Management," he explained how Tamr helped Creative Artists Agency specifically use machine learning to take a new lens to what the data management ecosystem should look like in order to transform how they were using data and analytics within the company.

In the process, Tamr "was able to dramatically increase the throughput of their analytics and help drive more insight for their agents," he said.

"Within every industry, the old saying is 'your biggest assets leave in the elevator every night,'" he noted, adding: "Within entertainment, nothing is more true in that people are the entertainment industry's biggest asset. The actors, the musicians, the artists that people pay to see [are] really at the heart of the entertainment industry."

And he pointed out that one of the biggest challenges within the industry is how you match the right talents, the right piece of content for the right audience.

"It is often not the end analytic that is the most challenging part," he told viewers, explaining: "I think in a lot of cases, when we're talking about data, we're usually thinking about those analytics: the visualization, the model, whatever it is that comes out the other end that helps us make a decision. However, what often is the biggest bottleneck is the data around it," he said.

As an example, he noted that we can look at actor Vin Diesel and try to gauge his social reach, the top demographics that include his fans and what an ideal role for him would be where a company could attract a big audience and be successful.

"If the data is readily available at our fingertips and nicely organized, then these questions become pretty quick to answer," he said, adding: "We can answer these questions in seconds. But often today they take weeks [to answer] because the data itself is not neatly organized. If we want to understand who is Vin Diesel's target market [and] what roles should we put him in, that involves pulling audience data, YouTube data, social media data about what are people talking about [and] what the sentiment is like."

Some of that data is structured and some of it is unstructured, he noted. But the bottom line is that it's "extremely buried and scattered everywhere, and so it makes it difficult to even have the information needed in order to make decisions confidently," he told viewers.

"At the end of the day, any decision within this industry is a bit of a leap of faith, but without the data to back it up, you're often just kind of flying blind," he said.

Once you get the data organized in a warehouse, the next problem companies face is that "the data itself is dirty," he noted.

"If you want to figure out the impact of, for example, Steve Carell on the TV show 'The Office,' you have to sift through all of this data, and just wrangling and organizing all this data is often the bottleneck for such analytics," he said.
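The kind of wrangling he describes can be illustrated with a tiny, hypothetical example (the names, counts, and the `normalize` helper are all invented): the same person appears under inconsistent spellings across sources, and records must be unified before any analytics can run. Tamr's actual ML-driven record linkage goes far beyond this crude exact-key grouping.

```python
# Hypothetical illustration of the wrangling bottleneck: the same
# actor appears under inconsistent spellings, so records must be
# normalized and unified before mention counts mean anything.

import pandas as pd

social = pd.DataFrame({
    "name": ["Steve Carell", "steve carell ", "S. Carell"],
    "mentions": [1200, 300, 150],
})

def normalize(name: str) -> str:
    # Crude cleanup: trim whitespace, lowercase, drop abbreviation dots.
    return name.strip().lower().replace(".", "")

social["key"] = social["name"].map(normalize)
# Exact-key grouping unifies two of the three records; "S. Carell"
# would still need the fuzzy record linkage that ML-based tools
# automate at scale.
unified = social.groupby("key", as_index=False)["mentions"].sum()
print(unified)
```

Hand-written cleanup like this is exactly the one-off analyst work the article says dominates time to insight.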

"That was a key part of the bottleneck at CAA: no matter how much data they were acquiring, they were just running into more and more issues with actually making the data usable," he told viewers.

However, the good news is that, particularly over the past handful or so years, "the tools that are available, the solutions in the market, have evolved quite a bit and we now have what we need in order to solve this problem," he stressed.

"Traditionally, the way the market has looked at this problem has been kind of twofold: You have your source system in which you just collect all the data you need, so everything is in a warehouse or data lake, and then you need people who can analyze all that data and figure it out," he noted.

The problem with that, however, is many of those analysts, who are very scarce and difficult to come by, end up spending a lot of their time doing one-off cleanup and data preparation, and not on analytics, he pointed out. These kinds of human-intensive approaches are difficult to maintain and lead to poor productivity, he said.

However, what used to take weeks to gain insight now takes only minutes because companies are starting to see their data as an asset and are focused on the data engineering, enabling the prep to be done upstream, he told viewers. That is dramatically reducing the amount of time analysts and data scientists are spending preparing and getting the data right, he said.

"And CAA is one of the best examples that we've seen in the media and entertainment industry of reducing the time to insight from two weeks down to two seconds," he noted.

He went on to stress: "There isn't one silver bullet to solving this problem. There isn't a single suite or a single solution that you can buy that's going to do everything that you need to do in order to solve this problem."

Fortunately, CAA recognized early that it would need to invest in next-generation tools that are "open and interoperable and enable you to have that agility," he said. Also important was its shift to modern, cloud-based tools, he said, adding that CAA took "a completely cloud-first approach" to the challenge.

Click here for the presentation slide deck.

The May 27 HITS Live event tackled the quickly shifting IT needs of studios, networks and media service providers, along with how M&E vendors are stepping up to meet those needs. The all-live, virtual, global conference allowed for real-time Q&A, one-on-one chats with other attendees, and more.

HITS Live was presented by Microsoft Azure, with sponsorship by RSG Media, Signiant, Tape Ark, Whip Media Group, Zendesk, Eluvio, Sony, Avanade, 5th Kind, Tamr, EIDR and the Trusted Partner Network (TPN). The event is produced by the Media & Entertainment Services Alliance (MESA) and the Hollywood IT Society (HITS), in association with the Content Delivery & Security Association (CDSA) and the Smart Content Council.

For more information, click here.
