Archive for the ‘Free Software’ Category

Police Facial Recognition Technology Can’t Tell Black People Apart – Scientific American

Imagine being handcuffed in front of your neighbors and family for stealing watches. After spending hours behind bars, you learn that the facial recognition software state police used on footage from the store identified you as the thief. But you didn't steal anything; the software pointed cops to the wrong guy.

Unfortunately, this is not a hypothetical. It happened three years ago to Robert Williams, a Black father in suburban Detroit. Sadly, Williams' story is not a one-off. In a recent case of mistaken identity, facial recognition technology led to the wrongful arrest of a Black Georgian for purse thefts in Louisiana.

Our research supports fears that facial recognition technology (FRT) can worsen racial inequities in policing. We found that law enforcement agencies that use automated facial recognition disproportionately arrest Black people. We believe this results from factors that include the lack of Black faces in the algorithms' training data sets, a belief that these programs are infallible and a tendency of officers' own biases to magnify these issues.

While no amount of improvement will eliminate the possibility of racial profiling, we understand the value of automating the time-consuming, manual face-matching process. We also recognize the technology's potential to improve public safety. However, considering the potential harms of this technology, enforceable safeguards are needed to prevent unconstitutional overreaches.

FRT is an artificial intelligence-powered technology that tries to confirm the identity of a person from an image. The algorithms used by law enforcement are typically developed by companies like Amazon, Clearview AI and Microsoft, which build their systems for different environments. Despite massive improvements in deep-learning techniques, federal testing shows that most facial recognition algorithms perform poorly at identifying people other than white men.

Civil rights advocates warn that the technology struggles to distinguish darker faces, which will likely lead to more racial profiling and more false arrests. Further, inaccurate identification increases the likelihood of missed arrests.

Still, some government leaders, including New Orleans Mayor LaToya Cantrell, tout this technology's ability to help solve crimes. Amid the growing staffing shortages facing police nationwide, some champion FRT as a much-needed police coverage "amplifier" that helps agencies do more with fewer officers. Such sentiments likely explain why more than one quarter of local and state police forces and almost half of federal law enforcement agencies regularly access facial recognition systems, despite their faults.

This widespread adoption poses a grave threat to our constitutional right against unlawful searches and seizures.

Recognizing the threat to our civil liberties, cities like San Francisco and Boston banned or restricted government use of this technology. At the federal level, President Biden's administration released the Blueprint for an AI Bill of Rights in 2022. While intended to incorporate practices that protect our civil rights in the design and use of AI technologies, the blueprint's principles are nonbinding. In addition, earlier this year congressional Democrats reintroduced the Facial Recognition and Biometric Technology Moratorium Act. This bill would pause law enforcement's use of FRT until policy makers can create regulations and standards that balance constitutional concerns and public safety.

The proposed AI bill of rights and the moratorium are necessary first steps in protecting citizens from AI and FRT. However, both efforts fall short. The blueprint doesn't cover law enforcement's use of AI, and the moratorium only limits the use of automated facial recognition by federal authorities, not local and state governments.

Yet as the debate heats up over facial recognition's role in public safety, our research and that of others shows that, even with mistake-free software, this technology will likely contribute to inequitable law enforcement practices unless safeguards are put in place for nonfederal use too.

First, the concentration of police resources in many Black neighborhoods already results in disproportionate contact between Black residents and officers. With this backdrop, communities served by FRT-assisted police are more vulnerable to enforcement disparities, as the trustworthiness of algorithm-aided decisions is jeopardized by the demands and time constraints of police work, combined with an almost blind faith in AI that minimizes user discretion in decision-making.

Police typically use this technology in three ways: in-field queries to identify stopped or arrested persons, searches of video footage or real-time scans of people passing surveillance cameras. The police upload an image, and in a matter of seconds the software compares the image to numerous photos to generate a lineup of potential suspects.
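
To make the matching step concrete, here is a minimal sketch of how a one-to-many face search works in principle: each face is converted into a numeric embedding, and every enrolled photo is ranked by its similarity to the probe image. This is an illustration under stated assumptions, not any vendor's actual implementation; the embedding size, names, and random stand-in data are all hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def generate_lineup(probe, gallery, top_k=5):
    """Rank every enrolled face against the probe image's embedding.

    `probe` is the embedding of the uploaded image; `gallery` maps a
    person ID to a previously enrolled embedding. Production systems
    search millions of photos with approximate nearest-neighbor
    indexes, but the ranking principle is the same.
    """
    scores = [(person_id, cosine_similarity(probe, emb))
              for person_id, emb in gallery.items()]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_k]

# Random 128-dimensional vectors stand in for a real model's output.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)
print(generate_lineup(probe, gallery))
```

The point relevant to the surrounding discussion is that a system like this always returns its top-ranked candidates, however weak the best match is, which is why human review and score thresholds matter.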

Enforcement decisions ultimately lie with officers. However, people often believe that AI is infallible and don't question the results. On top of this, using automated tools is much easier than making comparisons with the naked eye.

AI-powered law enforcement aids also psychologically distance police officers from citizens. This removal from the decision-making process allows officers to separate themselves from their actions. Users also sometimes selectively follow computer-generated guidance, favoring advice that matches stereotypes, including those about Black criminality.

There's no solid evidence that FRT improves crime control. Nonetheless, officials appear willing to tolerate these racialized biases as cities struggle to curb crime. This leaves people vulnerable to encroachments on their rights.

The time for blind acceptance of this technology has passed. Software companies and law enforcement must take immediate steps towards reducing the harms of this technology.

For companies, creating reliable facial recognition software begins with balanced representation among designers. In the U.S., most software developers are white men. Research shows the software is much better at identifying members of the programmer's race. Experts attribute such findings largely to engineers' unconscious transmittal of their own-race bias into algorithms.

Own-race bias creeps in as designers unconsciously focus on facial features familiar to them. The resulting algorithm is mainly tested on people of their race. As such, many U.S.-made algorithms learn by looking at more white faces, which fails to help them recognize people of other races.

Using diverse training sets can help reduce bias in FRT performance. Algorithms learn to compare images by training with a set of photos. Disproportionate representation of white males in training images produces skewed algorithms, because Black people are overrepresented in mugshot databases and other image repositories commonly used by law enforcement. Consequently, AI is more likely to mark Black faces as criminal, leading to the targeting and arrest of innocent Black people.
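
One way to surface this kind of skew before deployment is to audit error rates separately for each demographic group instead of reporting a single aggregate accuracy. Below is a minimal sketch of such an audit, assuming labeled trial results are available; the field names and sample records are illustrative.

```python
from collections import defaultdict

# Each trial records the probe subject's group, whether the system
# declared a match, and whether that match was actually correct.
trials = [
    {"group": "A", "matched": True, "correct": True},
    {"group": "A", "matched": True, "correct": False},
    {"group": "B", "matched": True, "correct": True},
    # ... thousands more rows in a real audit
]

def false_match_rate_by_group(trials):
    counts = defaultdict(lambda: {"matches": 0, "false": 0})
    for t in trials:
        if t["matched"]:
            counts[t["group"]]["matches"] += 1
            if not t["correct"]:
                counts[t["group"]]["false"] += 1
    return {group: (c["false"] / c["matches"]) if c["matches"] else 0.0
            for group, c in counts.items()}

# A large gap between groups is a red flag that the training data
# or the match threshold is skewed.
print(false_match_rate_by_group(trials))
```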

We believe that the companies that make these products need to take staff and image diversity into account. However, this does not remove law enforcement's responsibility. Police forces must critically examine their methods if we want to keep this technology from worsening racial disparities and leading to rights violations.

For police leaders, uniform similarity score minimums must be applied to matches. After the facial recognition software generates a lineup of potential suspects, it ranks candidates based on how similar the algorithm believes the images are. Currently, departments regularly decide their own similarity score criteria, which some experts contend raises the chances for wrongful and missed arrests.
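
In code, a uniform minimum is just a shared floor applied before any candidate reaches an investigator, rather than a per-department setting. A sketch of that policy over ranked (candidate, score) pairs like those in the lineup example above; the 0.90 floor is an arbitrary illustrative value, not a recommended standard.

```python
def apply_uniform_minimum(ranked_candidates, min_score=0.90):
    """Drop candidates below a department-independent similarity floor.

    `ranked_candidates` is a list of (person_id, score) pairs sorted by
    score. Returning an empty list rather than the "best" weak match is
    the point: no candidate is better than a bad one.
    """
    return [(pid, score) for pid, score in ranked_candidates
            if score >= min_score]

# With a uniform floor, a lineup whose best score is 0.71 yields nothing,
# no matter which department ran the search.
print(apply_uniform_minimum([("person_42", 0.71), ("person_7", 0.64)]))
```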

FRT's adoption by law enforcement is inevitable, and we see its value. But if racial disparities already exist in enforcement outcomes, this technology will likely exacerbate inequities, like those seen in traffic stops and arrests, unless adequate regulation and transparency are in place.

Fundamentally, police officers need more training on FRT's pitfalls, human biases and historical discrimination. Beyond guiding officers who use this technology, police and prosecutors should also disclose that they used automated facial recognition when seeking a warrant.

Although FRT isn't foolproof, following these guidelines will help defend against uses that drive unnecessary arrests.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

See the original post:
Police Facial Recognition Technology Can't Tell Black People Apart - Scientific American

Porsche Taycan Gets EV Charging Station Finder in Apple Maps – Car and Driver

Porsche has added Apple Maps integration covering charger locations for U.S. Taycan models, giving CarPlay users yet another reason to stick with the software. The car was already equipped with Porsche's native charging planner, which can suggest stops based on information like the vehicle's state of charge (SOC), expected traffic conditions, and average speed. But the reality is that most owners seem to prefer third-party software like Apple CarPlay and Android Auto. As for Android, a Porsche spokesperson told Car and Driver that the Taycan does come with Android Auto capability as standard, but it doesn't have the EV SOC integration or charge stop suggestions that the new CarPlay system does.

The new integration means that Taycan owners won't need to leave CarPlay or settle for using the native navigation system when trying to map out charging stops. On top of doing a lot of the same quality-of-life things the native system does (like analyzing SOC and expected traffic), the Apple system can also analyze elevation changes along a given route to get a more accurate estimation of battery usage. According to Porsche, if you allow the vehicle's SOC to deplete to a low enough margin, the new software will automatically offer a route to the nearest compatible charging station.
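
Porsche has not published its estimation method, but the reason elevation matters is straightforward physics: climbing converts battery energy into potential energy that a flat-road model ignores. The toy per-segment estimate below illustrates the idea; the mass, consumption, and regeneration figures are assumptions for illustration, not Taycan specifications.

```python
# Toy route-energy model: flat-road consumption plus the potential
# energy of a climb; regenerative braking recovers part of a descent.
MASS_KG = 2300.0          # assumed vehicle mass
FLAT_KWH_PER_KM = 0.20    # assumed flat-road consumption
REGEN_EFFICIENCY = 0.6    # assumed fraction of descent energy recovered
G = 9.81                  # gravitational acceleration, m/s^2
J_PER_KWH = 3.6e6

def segment_energy_kwh(distance_km: float, elevation_gain_m: float) -> float:
    flat = distance_km * FLAT_KWH_PER_KM
    climb = MASS_KG * G * elevation_gain_m / J_PER_KWH
    if elevation_gain_m < 0:   # descent: recover a fraction of the drop
        climb *= REGEN_EFFICIENCY
    return flat + climb

# A 50 km leg with a 400 m climb needs about 25% more energy than a flat one.
print(segment_energy_kwh(50, 400))   # ~12.5 kWh
print(segment_energy_kwh(50, 0))     # ~10.0 kWh
```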

The system relies on both CarPlay and the information fed to it from the vehicle. That means the normal Apple Maps app on your phone won't give the same charging recommendations. The system should work with any Taycan, but according to Porsche, any models from 2021 or earlier will need to go to a service center for a free software update. Porsche also provided a link for setup and FAQs for the software, which can be found here.

Associate News Editor

Jack Fitzgerald's love for cars stems from his as yet unshakable addiction to Formula 1. After a brief stint as a detailer for a local dealership group in college, he knew he needed a more permanent way to drive all the new cars he couldn't afford and decided to pursue a career in auto writing. By hounding his college professors at the University of Wisconsin-Milwaukee, he was able to travel Wisconsin seeking out stories in the auto world before landing his dream job at Car and Driver. His new goal is to delay the inevitable demise of his 2010 Volkswagen Golf.

See the rest here:
Porsche Taycan Gets EV Charging Station Finder in Apple Maps - Car and Driver

Tesla to roll out free Full Self-Driving software, but there’s a catch. Know here – HT Auto

Tesla is planning to roll out its Full Self-Driving (FSD) software to consumers for free. CEO Elon Musk has confirmed via a tweet that the company will offer all Tesla car owners in North America a one-month FSD trial at no charge. After that, the company will roll out the offer to its consumers in other regions around the world.

By: HT Auto Desk Updated on: 15 May 2023, 13:11 PM

Aiming to get more users to sample its much-hyped FSD software, Tesla believes a one-month free trial will give consumers a chance to test the technology, which is claimed to allow vehicles to run autonomously without driver intervention and is a significantly advanced version of the carmaker's existing semi-autonomous driver-assistance technology, Autopilot. Musk was responding to a tweet from a user who wanted to know when the subscription option for FSD would be released in Canada. The billionaire confirmed that the free trials would be coming soon, paving the way for subscriptions.

Also Read : Delhi Electric Vehicle Policy to be revised in 2023. What to expect

Currently, Tesla is offering the FSD software's beta version to a select number of consumers. A few days back, Musk hinted that Tesla would roll out FSD broadly once it's fully functional and glitch-free. His latest tweet further indicates that the automaker is nearing a smoother, more functional FSD, to avoid the embarrassment it faced when the software was first rolled out and the technology was found glitchy. "Once FSD is super smooth (not just safe), we will roll out a free month trial for all cars in North America. Then extend to rest of world after we ensure it works well on local roads and regulators approve it in that country," Musk wrote in his latest tweet. However, despite hinting at a nearing rollout, neither Tesla nor its CEO has given a specific timeframe for the launch.

First Published Date: 15 May 2023, 13:11 PM IST

Read more here:
Tesla to roll out free Full Self-Driving software, but there's a catch. Know here - HT Auto

Meta Made Its AI Tech Open-Source. Rivals Say It's a Risky Decision. – The New York Times

In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: It decided to give away its A.I. crown jewels.

The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, had created an A.I. technology, called LLaMA, that can power online chatbots. But instead of keeping the technology to itself, Meta released the system's underlying computer code into the wild. Academics, government researchers and others who gave their email address to Meta could download the code once the company had vetted the individual.

Essentially, Meta was giving its A.I. technology away as open-source software: computer code that can be freely copied, modified and reused, providing outsiders with everything they needed to quickly build chatbots of their own.

"The platform that will win will be the open one," Yann LeCun, Meta's chief A.I. scientist, said in an interview.

As a race to lead A.I. heats up across Silicon Valley, Meta is standing out from its rivals by taking a different approach to the technology. Driven by its founder and chief executive, Mark Zuckerberg, Meta believes that the smartest thing to do is share its underlying A.I. engines as a way to spread its influence and ultimately move faster toward the future.

Its actions contrast with those of Google and OpenAI, the two companies leading the new A.I. arms race. Worried that A.I. tools like chatbots will be used to spread disinformation, hate speech and other toxic content, those companies are becoming increasingly secretive about the methods and software that underpin their A.I. products.

Google, OpenAI and others have been critical of Meta, saying an unfettered open-source approach is dangerous. A.I.'s rapid rise in recent months has raised alarm bells about the technology's risks, including how it could upend the job market if it is not properly deployed. And within days of LLaMA's release, the system leaked onto 4chan, the online message board known for spreading false and misleading information.

"We want to think more carefully about giving away details or open sourcing code of A.I. technology," said Zoubin Ghahramani, a Google vice president of research who helps oversee A.I. work. "Where can that lead to misuse?"

Some within Google have also wondered if open-sourcing A.I. technology may pose a competitive threat. In a memo this month, which was leaked on the online publication Semianalysis.com, a Google engineer warned colleagues that the rise of open-source software like LLaMA could cause Google and OpenAI to lose their lead in A.I.

But Meta said it saw no reason to keep its code to itself. The growing secrecy at Google and OpenAI is a "huge mistake," Dr. LeCun said, and a "really bad take on what is happening." He argues that consumers and governments will refuse to embrace A.I. unless it is outside the control of companies like Google and Meta.

"Do you want every A.I. system to be under the control of a couple of powerful American companies?" he asked.

OpenAI declined to comment.

Meta's open-source approach to A.I. is not novel. The history of technology is littered with battles between open-source and proprietary, or closed, systems. Some hoard the most important tools that are used to build tomorrow's computing platforms, while others give those tools away. Most recently, Google open-sourced the Android mobile operating system to take on Apple's dominance in smartphones.

Many companies have openly shared their A.I. technologies in the past, at the insistence of researchers. But their tactics are changing because of the race around A.I. That shift began last year when OpenAI released ChatGPT. The chatbot's wild success wowed consumers and kicked up the competition in the A.I. field, with Google moving quickly to incorporate more A.I. into its products and Microsoft investing $13 billion in OpenAI.

While Google, Microsoft and OpenAI have since received most of the attention in A.I., Meta has also invested in the technology for nearly a decade. The company has spent billions of dollars building the software and the hardware needed to realize chatbots and other generative A.I., which produce text, images and other media on their own.

In recent months, Meta has worked furiously behind the scenes to weave its years of A.I. research and development into new products. Mr. Zuckerberg is focused on making the company an A.I. leader, holding weekly meetings on the topic with his executive team and product leaders.

On Thursday, in a sign of its commitment to A.I., Meta said it had designed a new computer chip and improved a new supercomputer specifically for building A.I. technologies. It is also designing a new computer data center with an eye toward the creation of A.I.

"We've been building advanced infrastructure for A.I. for years now, and this work reflects long-term efforts that will enable even more advances and better use of this technology across everything we do," Mr. Zuckerberg said.

Meta's biggest A.I. move in recent months was releasing LLaMA, which is what is known as a large language model, or L.L.M. (LLaMA stands for Large Language Model Meta AI.) L.L.M.s are systems that learn skills by analyzing vast amounts of text, including books, Wikipedia articles and chat logs. ChatGPT and Google's Bard chatbot are also built atop such systems.

L.L.M.s pinpoint patterns in the text they analyze and learn to generate text of their own, including term papers, blog posts, poetry and computer code. They can even carry on complex conversations.

In February, Meta openly released LLaMA, allowing academics, government researchers and others who provided their email address to download the code and use it to build a chatbot of their own.

But the company went further than many other open-source A.I. projects. It allowed people to download a version of LLaMA after it had been trained on enormous amounts of digital text culled from the internet. Researchers call this "releasing the weights," referring to the particular mathematical values learned by the system as it analyzes data.

This was significant because analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars, resources most companies do not have. Those who have the weights can deploy the software quickly, easily and cheaply, spending a fraction of what it would otherwise cost to create such powerful software.
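
This is why released weights matter in practice: with them in hand, generating text takes a few lines of code against an off-the-shelf library instead of a multimillion-dollar training run. A hedged sketch using the Hugging Face transformers API; the local directory is a placeholder for wherever weights obtained through Meta's vetted process were stored and converted, not an official distribution path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: assumes the weights were obtained through Meta's
# release process and converted to the Hugging Face format.
weights_dir = "./llama-weights"

tokenizer = AutoTokenizer.from_pretrained(weights_dir)
model = AutoModelForCausalLM.from_pretrained(weights_dir)

# With the pretrained weights loaded, generation is immediate; no
# training step, GPU cluster, or web-scale text corpus is required.
inputs = tokenizer("Open source A.I. matters because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```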

As a result, many in the tech industry believed Meta had set a dangerous precedent. And within days, someone released the LLaMA weights onto 4chan.

At Stanford University, researchers used Meta's new technology to build their own A.I. system, which was made available on the internet. A Stanford researcher named Moussa Doumbouya soon used it to generate problematic text, according to screenshots seen by The New York Times. In one instance, the system provided instructions for disposing of a dead body without being caught. It also generated racist material, including comments that supported the views of Adolf Hitler.

In a private chat among the researchers, which was seen by The Times, Mr. Doumbouya said distributing the technology to the public would be like "a grenade available to everyone in a grocery store." He did not respond to a request for comment.

Stanford promptly removed the A.I. system from the internet. The project was designed to provide researchers with technology that captured the behaviors of cutting-edge A.I. models, said Tatsunori Hashimoto, the Stanford professor who led the project. "We took the demo down as we became increasingly concerned about misuse potential beyond a research setting."

Dr. LeCun argues that this kind of technology is not as dangerous as it might seem. He said small numbers of individuals could already generate and spread disinformation and hate speech. He added that toxic material could be tightly restricted by social networks such as Facebook.

"You can't prevent people from creating nonsense or dangerous information or whatever," he said. "But you can stop it from being disseminated."

For Meta, more people using open-source software can also level the playing field as it competes with OpenAI, Microsoft and Google. If every software developer in the world builds programs using Metas tools, it could help entrench the company for the next wave of innovation, staving off potential irrelevance.

Dr. LeCun also pointed to recent history to explain why Meta was committed to open-sourcing A.I. technology. He said the evolution of the consumer internet was the result of open, communal standards that helped build the fastest, most widespread knowledge-sharing network the world had ever seen.

"Progress is faster when it is open," he said. "You have a more vibrant ecosystem where everyone can contribute."

Read this article:
Meta Made Its AI Tech Open-Source. Rivals Say It's a Risky Decision. - The New York Times

You may not care where you download software from, but malware … – We Live Security

Why do people still download files from sketchy places and get compromised as a result?

One of the pieces of advice that security practitioners have been giving out for the past couple of decades, if not longer, is that you should only download software from reputable sites. As far as computer security advice goes, this seems like it should be fairly simple to practice.

But even when such advice is widely shared, people still download files from distinctly nonreputable places and get compromised as a result. I have been a reader of Neowin for over two decades now, and a member of its forum for almost that long. But that is not the only place I participate online: for a little over three years, I have been volunteering my time to moderate a couple of Reddit's forums (subreddits) that provide both general computing support as well as more specific advice on removing malware. In those subreddits, I have helped people over and over again as they attempted to recover from the fallout of compromised computers. Attacks these days are usually financially motivated, but there are other unanticipated consequences as well. I should state this is not something unique to Reddit's users. These types of questions also come up in online chats on various Discord servers where I volunteer my time as well.

One thing I should point out is that both the Discord and Reddit services skew to a younger demographic than social media sites such as Twitter and Facebook. I also suspect they are younger than the average WeLiveSecurity reader. These people grew up digitally literate and have had access to advice and discussions about safe computing practices since pre-school.

Despite having the advantage of having grown up with computers and information on securing them, how is it that these people have fallen victim to certain patterns of attacks? And from the information security practitioners' side, where exactly is the disconnect occurring between what we're telling people to do (or not do, as the case may be) and what they are doing (or, again, not doing)?

Sometimes, people will openly admit that they knew better but just did a dumb thing, trusting the source of the software when they knew it was not trustworthy. Sometimes, though, it appeared trustworthy, but was not. And at other times, they had very clearly designated the source of the malware as trustworthy even when it was inherently untrustworthy. Let us take a look at the most common scenarios that lead to their computers being compromised:

I would point out that these are not the only means by which people were tricked into running malware. WeLiveSecurity has reported on several notable cases recently that involved deceiving the user:

Do any of these scenarios seem similar to each other in any way? Despite the various means of receiving the file (seeking out versus being asked, using a search engine, video site or piracy site, etc.), they all have one thing in common: they exploited trust.

When security practitioners talk about downloading files only from reputable websites, it seems that we are often only doing half of the job of educating the public about them, or maybe even a little less, for that matter: we've done a far better job of telling people what kind of sites to go to (reputable ones, obviously) without explaining what makes a site safe to download from in the first place. So, without any fanfare, here is what makes a site reputable to download software from:

And that's it! In today's world of software, the publisher's site could be a bit more flexible than what it historically has been. Yes, it could be a site with the same domain name as the publisher's site, but it could also be that the files are located on GitHub, SourceForge, hosted on a content delivery network (CDN) operated by a third party, and so forth. That is still the publisher's site, as the files were explicitly uploaded by them. Sometimes, publishers provide additional links to additional download sites, too. This is done for a variety of reasons, such as to defray hosting costs, to provide faster downloads in different regions, to promote the software in other parts of the world, and so forth. These, too, are official download sites because they are specifically authorized by the author or publisher.

There are also sites and services that act as software repositories. SourceForge and GitHub are popular sites for hosting open-source projects. For shareware and trial versions of commercial software, there are numerous sites that specialize in listing their latest versions for downloading. These download sites function as curators for finding software in one place, which makes it easy to search and discover new software. In some instances, however, they also can have a darker side: some of these sites place software wrappers around files downloaded from them that can prompt you to install additional software besides the program you were looking for. These program bundlers may do things completely unrelated to the software they are attached to and may, in fact, install potentially unwanted applications (PUAs) onto your computer.

Other types of sites to be aware of are file locker services such as Box, Dropbox, and WeTransfer. While these are all very legitimate file sharing services, they can be abused by a threat actor: people may assume that because the service is trusted, programs downloaded from them are safe. Conversely, IT departments checking for the exfiltration of data may ignore uploads of files containing personal information and credentials because they are known to be legitimate services.

When it comes to search engines, interpreting their results can be tricky for the uninitiated, or people who are just plain impatient. While the goal of any search engine, whether it is Bing, DuckDuckGo, Google, Yahoo, or another, is to provide the best and most accurate results, their core businesses often revolve around advertising. This means that the results at the top of the page are often not the best and most accurate ones, but paid advertising. Many people do not notice the difference between advertising and search engine results, and criminals take advantage of this through malvertising campaigns, buying advertising space to redirect people to websites used for phishing, other undesirable activities, and malware. In some instances, criminals may register a domain name using typosquatting or a similar-looking top-level domain to that of the software publisher in order to make their website address less noticeable at first glance, such as example.com versus examp1e.com (note how the letter l has been replaced by the number 1 in the second domain).
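
A rough way to catch the examp1e.com trick programmatically is to normalize common look-alike characters and compare the result against a list of domains you actually trust. A minimal sketch, assuming a hand-maintained homoglyph table and trusted list; both are illustrative.

```python
# Map common look-alike characters to the letters they imitate.
HOMOGLYPHS = str.maketrans({"1": "l", "0": "o", "3": "e", "5": "s"})

TRUSTED_DOMAINS = {"example.com", "neowin.net", "majorgeeks.com"}

def looks_like_typosquat(domain):
    """Return the trusted domain this one appears to imitate, if any."""
    lowered = domain.lower()
    if lowered in TRUSTED_DOMAINS:
        return None                    # exact trusted match, nothing to flag
    normalized = lowered.translate(HOMOGLYPHS)
    if normalized in TRUSTED_DOMAINS:
        return normalized              # differs only by look-alike characters
    return None

print(looks_like_typosquat("examp1e.com"))  # -> example.com
print(looks_like_typosquat("example.com"))  # -> None
```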

I will point out that there are many legitimate, safe places on the internet to download free and trial versions of software, because they link to the publishers' own downloads. An example of this is Neowin, for whom the original version of this article was written. Neowin's software download section does not engage in any type of disingenuous behavior. All download links either go directly to the publisher's own files or to their web page, making Neowin a reliable source for finding new software. Another reputable site that links directly to software publishers' downloads is MajorGeeks, which has been listing them on a near-daily basis for over two decades.

While direct downloading ensures that you get software from the company (or individual) that wrote it, that does not necessarily mean it is free of malware: there have been instances where malicious software was included in a software package, unintentionally or otherwise. Likewise, if a software publisher bundles potentially unwanted applications or adware with their software, then you will still receive that with a direct download from their site.
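
One practical defense after any download, direct or otherwise, is to verify the file's cryptographic hash against the value the publisher lists on their own site. A minimal sketch using Python's standard library; the file name and expected digest are placeholders for real values taken from a publisher's download page.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large installers don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: copy the expected hash from the publisher's own
# page, fetched over HTTPS from their own domain.
expected = "0000000000000000000000000000000000000000000000000000000000000000"
actual = sha256_of("installer.exe")
print("OK" if actual == expected else "MISMATCH - do not run this file!")
```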

Special consideration should be applied to the various application software stores run by operating system vendors, such as the Apple App Store, the Google Play store, Microsoft's Windows app store, and so forth. One might assume these sites to be reputable download sites, and for the most part they are exactly that, but there is no 100% guarantee: unscrupulous software authors have circumvented app stores' vetting processes to distribute software that invades people's privacy with spyware, displays egregious advertisements with adware, and engages in other unwanted behaviors. These app stores do have the ability to de-list such software from their stores as well as remotely uninstall it from afflicted devices, which offers some remedy; however, this could be days or weeks (or more) after the software has been made available. Even if you only download apps from the official store, having security software on your device to protect it is a must.

Device manufacturers, retailers, and service providers may add their own app stores to devices; however, these may not have the ability to uninstall apps remotely.

With all of that in mind, you are probably wondering exactly what the malware did on the affected computers. While there were different families of malware involved, each of which has its own set of actions and behaviors, two basically stood out as repeat offenders, generating many requests for assistance.

And just in case you were wondering: I have never heard of anyone successfully decrypting their files after paying the ransom to the STOP/DJVU criminals. Your best bet at decrypting your files is to back them up in case a decryptor is ever released.

As far as its functionality goes, Redline Stealer performs some fairly common activities for information-stealing malware, such as collecting information about the version of Windows the PC is running, username, and time zone. It also collects some information about the environment where it is running, such as display size, the processor, RAM, video card, and a list of programs and processes on the computer. This may be to help determine if it is running in an emulator, virtual machine, or a sandbox, which could be a warning sign to the malware that it is being monitored or reverse engineered. And like other programs of its ilk, it can search for files on the PC and upload them to a remote server (useful for stealing private keys and cryptocurrency wallets), as well as download files and run them.
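
To make the environment-profiling step concrete, here is a benign sketch that gathers the same categories of host data using only Python's standard library. It illustrates the kind of fingerprint involved, and why sandbox operators randomize these values; it is not Redline's actual code.

```python
import getpass
import locale
import os
import platform
import time

# The same categories of host data an information stealer profiles.
# Tiny CPU counts, generic usernames, or default locales can hint to
# malware that it is running inside an analysis sandbox or VM.
fingerprint = {
    "os_version": platform.platform(),   # e.g. "Windows-10-10.0.19045-SP0"
    "username": getpass.getuser(),
    "timezone": time.tzname,
    "locale": locale.getlocale(),
    "cpu_count": os.cpu_count(),
    "machine": platform.machine(),
}
print(fingerprint)
```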

But the primary function of an information stealer is to steal information, so with that in mind, what exactly does Redline Stealer go after? It steals credentials from many programs, including Discord, FileZilla, Steam, Telegram, and various VPN clients (such as OpenVPN and ProtonVPN), as well as cookies and credentials from web browsers such as Google Chrome, Mozilla Firefox, and their derivatives. Since modern web browsers do not just store accounts and passwords, but credit card info as well, this can pose a significant threat.

Since this malware is used by different criminal gangs, each of them might focus on something slightly different. In these instances, though, the targets were most often Discord, Google, and Steam accounts. The compromised Discord accounts were used to spread the malware to friends. The Google accounts were used to access YouTube and inflate views for certain videos, as well as to upload videos advertising various fraudulent schemes, causing the accounts to be banned. The Steam accounts were checked for games that had in-game currencies or items which could be stolen and used or resold by the attacker. These might seem like odd choices given all the things which can be done with compromised accounts, but for teenagers, these might be the most valuable online assets they possess.

To summarize, here we have two different types of malware that are sold as services for use by other criminals. In these instances, those criminals seemed to target victims in their teens and early twenties: in one case, extorting victims for an amount proportional to whatever funds they might have; in the other, targeting their Discord, YouTube (Google), and online game (Steam) accounts. Given the victimology, one has to wonder whether these criminal gangs are composed of people in similar age ranges, and if so, whether they chose specific targeting and enticement methods they knew would be highly effective against their peers.

Security practitioners advise people to keep their computers' operating systems and applications up to date, to only use their latest versions, and to run security software from established vendors. And, for the most part, people do that, and it protects them from a wide variety of threats.

But when you start looking for sketchy sources to download from, things can take a turn for the worse. Security software does try to account for human behavior, but so do criminals who exploit concepts such as reputation and trust. When a close friend on Discord asks you to look at a program and warns that your antivirus software may incorrectly detect it as a threat, who are you going to believe, your security software or your friend? Programmatically responding to and defending against attacks on trust, which are essentially types of social engineering, can be difficult. In the type of scenarios explained here, it is user education and not computer code that may be the ultimate defense, but that is only if the security practitioners get the right messaging across.

The author would like to thank his colleagues Bruce P. Burrell, Alexandre Côté Cyr, Nick FitzGerald, Tomáš Foltýn, Lukáš Štefanko, and Righard Zwienenberg for their assistance with this article, as well as Neowin for publishing the original version of it.

Aryeh Goretsky, Distinguished Researcher, ESET

Note: An earlier version of this article was published on tech news site Neowin.

View original post here:
You may not care where you download software from, but malware ... - We Live Security