Archive for the ‘Free Software’ Category

RIB Software launches free-to-use RIB Carbon Quantifier for … – GlobeNewswire

RIB Software launches free-to-use RIB Carbon Quantifier for optimized carbon quantification in construction.

RIB Software, a leading multinational provider of innovative technology solutions to the architecture, engineering, and construction (AEC) industries, has partnered with the non-profit organization Building Transparency to develop the RIB Carbon Quantifier.

Through a direct link with Building Transparency's Embodied Carbon in Construction Calculator (EC3), the application allows users to quickly, easily and consistently allocate carbon environmental product declaration (EPD) values against their imported estimate data, including resource quantities.

This comes as studies highlight the industry's impact on the environment, with construction constituting a staggering 39% of global greenhouse gas emissions and accounting for 36% of global energy demands.

René Wolf, Chief Executive Officer at RIB Software, says there is a growing need to reduce carbon emissions within the engineering and construction sector to combat the damaging effects of climate change.

Until recently, the construction industry's response to reducing building-related emissions has focused on energy efficiency by reducing operational emissions - the energy used to heat, cool and light buildings.

While previously effective, this approach overlooks the embodied carbon emissions associated with material and construction processes across a building's lifecycle, which represent a quarter of the carbon emitted during the life of a building and 11% of all global carbon emissions.

"It is therefore critical to increase efforts to quantify, monitor, evaluate and ultimately reduce the embodied carbon emitted throughout the lifecycle of a construction project - and that is the driving factor behind why RIB Software developed the Carbon Quantifier application," explains Wolf.

Through a seamless integration with Building Transparency's premier, free-to-use EC3 tool, the RIB Carbon Quantifier application is set to be the first of its kind in the industry to assist the built environment in extracting and quantifying embodied carbon data more quickly and efficiently, optimizing carbon quantification and reducing estimate delivery times.

Through this powerful technology partnership, industry professionals will now have access to an additional toolset to quantify and measure carbon more efficiently, better facilitating design and procurement decisions and helping the global AEC industry achieve a lower embodied carbon footprint.

Stacy Smedley, Executive Director of Building Transparency, says reducing embodied carbon emissions is one of the biggest opportunities in the fight against global warming. "Partnerships, like ours with RIB Software, are critical to driving action in the building sector and identifying new solutions that make it easier to prioritize low-carbon decision-making on projects. It's exciting to have our EC3 data and its large carbon impact database be utilized and leveraged for tools like the RIB Carbon Quantifier."

The primary features of the RIB Carbon Quantifier include the easy extraction of embodied carbon data from Building Transparency's EC3 database against estimates; keeping a repository of each estimate's embodied carbon data for cross-referencing and facilitating easier allocation of carbon values for future projects; the ability to easily export aligned quantified data back to EC3 for analytics, reporting and dashboarding; and providing users with a direct integration with other estimating products within the RIB portfolio of products.
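At its core, this kind of allocation multiplies each resource quantity in an estimate by the global warming potential (GWP) factor of a matching EPD and sums the results. The sketch below is a hypothetical illustration of that step, not RIB's actual implementation; the data structures, resource names and GWP figures are assumptions for illustration only.

```python
# Hypothetical sketch of allocating EPD carbon values against estimate
# line items, as a tool like the RIB Carbon Quantifier might do.
# Field names and GWP figures are illustrative assumptions, not EC3 data.
from dataclasses import dataclass

@dataclass
class EstimateItem:
    resource: str     # e.g. "ready-mix concrete"
    quantity: float   # amount in the EPD's declared unit
    unit: str         # e.g. "m3"

# Illustrative EPD values: kg CO2e per declared unit (assumed figures).
EPD_GWP = {
    "ready-mix concrete": 300.0,   # kg CO2e per m3 (assumed)
    "structural steel": 1.2,       # kg CO2e per kg (assumed)
}

def allocate_embodied_carbon(items: list[EstimateItem]) -> dict[str, float]:
    """Return kg CO2e per resource: quantity multiplied by the EPD GWP factor."""
    results: dict[str, float] = {}
    for item in items:
        factor = EPD_GWP.get(item.resource)
        if factor is None:
            continue  # no matching EPD; a real tool would flag this for review
        results[item.resource] = results.get(item.resource, 0.0) + item.quantity * factor
    return results

estimate = [
    EstimateItem("ready-mix concrete", 150.0, "m3"),
    EstimateItem("structural steel", 12000.0, "kg"),
]
print(allocate_embodied_carbon(estimate))
# {'ready-mix concrete': 45000.0, 'structural steel': 14400.0}
```

In practice the GWP factors would come from EPD records rather than a hard-coded table, and the aligned results would be exported back for reporting, as the feature list above describes.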

Wolf says the need to accelerate decarbonization practices in the AEC industry is critical, and using an application like the RIB Carbon Quantifier will not only allow users to optimize carbon quantification but will also help them avoid unnecessary energy use and emissions from associated projects and processes.

"At RIB, we are driven by transformative digital technologies, industry best practices and trends that help propel the industry forward and make engineering and construction more efficient and sustainable," he concludes.

For more information about the RIB Carbon Quantifier, email cq@rib-software.com or visit rib-software.com/en/home/carbon-quantifier.

[ENDS]

Press Enquiries

Kim Immelman: kim.immelman@rib-software.com

Go here to read the rest:
RIB Software launches free-to-use RIB Carbon Quantifier for ... - GlobeNewswire

Read the letter: Twitter accuses Microsoft of using its data in unauthorized ways – CNBC

[Image: Elon Musk, CEO of Tesla, speaks with CNBC on May 16, 2023. Credit: David A. Grogan | CNBC]

Twitter is accusing Microsoft of using the social media company's data in ways that were unauthorized and never disclosed.

Alex Spiro, a partner at Quinn Emanuel Urquhart & Sullivan and attorney for Twitter owner Elon Musk, sent a letter to Microsoft on Thursday laying out the claims, including that the software company "may have been in violation of multiple provisions" of its agreement with Twitter over data use.

It's the latest rift among tech companies in the growing debate over who owns data that can be used to train artificial intelligence and machine learning software. The New York Times first reported on the letter, a copy of which was obtained by CNBC.

After Musk led a buyout of Twitter in October and appointed himself CEO, the company started charging for use of its application programming interface, which enables developers to embed tweets into their software and services and access Twitter data.

The API was previously free to use for some researchers, partners and developers who agreed to Twitter's terms. Twitter API-driven apps include Hootsuite, Sprout Social and Sprinklr.

According to the letter from Spiro to Microsoft CEO Satya Nadella and the company's board, last month Microsoft "declined to pay even a discounted rate for continued access to Twitter's APIs and content."

As of April, Microsoft had at least five products that used the Twitter API, including the Azure cloud, Bing search engine and Power Platform low-code application development tools, Spiro wrote.

The agreement restricts excessive use of Twitter's programming interfaces. However, for one of the Microsoft services using Twitter data, "account information outright states that it intends to allow its customers to 'go around throttling limits,'" Spiro wrote.
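For readers unfamiliar with the term, "throttling limits" are per-token rate limits that an API such as Twitter's enforces: each access token is allowed only a certain number of requests per time window. The sketch below is a generic, hypothetical client-side throttle that stays within such a cap; it is not Twitter's or Microsoft's code, and the request counts are illustrative assumptions. "Token pooling," mentioned later in the letter, refers to spreading requests across many tokens so that no single token hits a cap like this one.

```python
# Hypothetical sketch of a client-side throttle that respects a per-token
# rate limit (e.g. N requests per 15-minute window). The values are
# assumptions for illustration, not Twitter's actual limits.
import time
from collections import deque

class Throttle:
    def __init__(self, max_requests: int = 450, window_seconds: int = 900):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps: deque[float] = deque()  # times of recent requests

    def wait_for_slot(self) -> None:
        """Block until another request can be made without exceeding the limit."""
        now = time.monotonic()
        # Drop requests that have aged out of the current window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_requests:
            sleep_for = self.window - (now - self.timestamps[0])
            time.sleep(max(sleep_for, 0))
        self.timestamps.append(time.monotonic())

# Usage: call throttle.wait_for_slot() before each API request so a single
# token never exceeds its allotted window.
throttle = Throttle()
```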

A Microsoft spokesperson acknowledged receipt of the letter and told CNBC the company will review it and "respond appropriately."

"Today we heard from a law firm representing Twitter with some questions about our previous use of the free Twitter API," the spokesperson said in an email. "We look forward to continuing our long-term partnership with the company."

Musk has been openly critical of Microsoft's tight relationship with OpenAI, the creator of the chatbot ChatGPT. Musk was an early backer of OpenAI, but the company has since raised billions of dollars from Microsoft, which is embedding its AI technology into many core products.

"Microsoft has a very strong say, if not directly controls, OpenAI at this point," Musk told CNBC in an interview this week. Nadella recently challenged Musk's claim in an interview with CNBC's Andrew Ross Sorkin, saying Microsoft has "a noncontrolling interest" in the startup.

Spiro did not name OpenAI or mention its ChatGPT and DALL-E applications or large language models in the letter. He did press Microsoft for details, including "a description of any token pooling implemented in any of the Microsoft Apps, including the time period(s) when any such token pooling occurred and the number of tokens that were pooled."

Musk and Nadella have had other interactions of late.

Last year, Musk approached Nadella as he was raising money for his Twitter buyout, according to text messages that became public via court filings. Nadella wrote in one text to Musk, "will for sure follow-up on Teams feedback!" Teams is Microsoft's chat app.

Read the full letter from Twitter to Microsoft here.

Here is the original post:
Read the letter: Twitter accuses Microsoft of using its data in unauthorized ways - CNBC

Police Facial Recognition Technology Can’t Tell Black People Apart – Scientific American

Imagine being handcuffed in front of your neighbors and family for stealing watches. After spending hours behind bars, you learn that the facial recognition software state police used on footage from the store identified you as the thief. But you didn't steal anything; the software pointed cops to the wrong guy.

Unfortunately, this is not a hypothetical. This happened three years ago to Robert Williams, a Black father in suburban Detroit. Sadly, Williams' story is not a one-off. In a recent case of mistaken identity, facial recognition technology led to the wrongful arrest of a Black Georgian for purse thefts in Louisiana.

Our research supports fears that facial recognition technology (FRT) can worsen racial inequities in policing. We found that law enforcement agencies that use automated facial recognition disproportionately arrest Black people. We believe this results from factors that include the lack of Black faces in the algorithms' training data sets, a belief that these programs are infallible and a tendency of officers' own biases to magnify these issues.

While no amount of improvement will eliminate the possibility of racial profiling, we understand the value of automating the time-consuming, manual face-matching process. We also recognize the technology's potential to improve public safety. However, considering the potential harms of this technology, enforceable safeguards are needed to prevent unconstitutional overreaches.

FRT is an artificial intelligence-powered technology that tries to confirm the identity of a person from an image. The algorithms used by law enforcement are typically developed by companies like Amazon, Clearview AI and Microsoft, which build their systems for different environments. Despite massive improvements in deep-learning techniques, federal testing shows that most facial recognition algorithms perform poorly at identifying people besides white men.

Civil rights advocates warn that the technology struggles to distinguish darker faces, which will likely lead to more racial profiling and more false arrests. Further, inaccurate identification increases the likelihood of missed arrests.

Still, some government leaders, including New Orleans Mayor LaToya Cantrell, tout this technology's ability to help solve crimes. Amid the growing staffing shortages facing police nationwide, some champion FRT as a much-needed police coverage amplifier that helps agencies do more with fewer officers. Such sentiments likely explain why more than one quarter of local and state police forces and almost half of federal law enforcement agencies regularly access facial recognition systems, despite their faults.

This widespread adoption poses a grave threat to our constitutional right against unlawful searches and seizures.

Recognizing the threat to our civil liberties, cities like San Francisco and Boston banned or restricted government use of this technology. At the federal level, President Biden's administration released the Blueprint for an AI Bill of Rights in 2022. While intended to incorporate practices that protect our civil rights in the design and use of AI technologies, the blueprint's principles are nonbinding. In addition, earlier this year congressional Democrats reintroduced the Facial Recognition and Biometric Technology Moratorium Act. This bill would pause law enforcement's use of FRT until policy makers can create regulations and standards that balance constitutional concerns and public safety.

The proposed AI bill of rights and the moratorium are necessary first steps in protecting citizens from AI and FRT. However, both efforts fall short. The blueprint doesn't cover law enforcement's use of AI, and the moratorium only limits the use of automated facial recognition by federal authorities, not local and state governments.

Yet as the debate heats up over facial recognition's role in public safety, our research and that of others shows how, even with mistake-free software, this technology will likely contribute to inequitable law enforcement practices unless safeguards are put in place for nonfederal use too.

First, the concentration of police resources in many Black neighborhoods already results in disproportionate contact between Black residents and officers. With this backdrop, communities served by FRT-assisted police are more vulnerable to enforcement disparities, as the trustworthiness of algorithm-aided decisions is jeopardized by the demands and time constraints of police work, combined with an almost blind faith in AI that minimizes user discretion in decision-making.

Police typically use this technology in three ways: in-field queries to identify stopped or arrested persons, searches of video footage or real-time scans of people passing surveillance cameras. The police upload an image, and in a matter of seconds the software compares the image to numerous photos to generate a lineup of potential suspects.
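In outline, this one-to-many search converts each face image into a numeric embedding and ranks gallery photos by how similar their embeddings are to the probe image. The following sketch is a simplified, hypothetical illustration of that ranking step using cosine similarity; real FRT products derive embeddings from trained face-recognition models, and nothing here reflects any vendor's actual implementation.

```python
# Simplified, hypothetical sketch of how a one-to-many face search ranks
# candidates: compare a probe embedding against gallery embeddings with
# cosine similarity and return the closest matches. Real systems compute
# these embeddings with deep face-recognition models.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(probe: np.ndarray,
                    gallery: dict[str, np.ndarray],
                    top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top_k gallery identities ranked by similarity to the probe."""
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_k]

# Toy example with random vectors standing in for model outputs.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(100)}
probe = gallery["person_42"] + rng.normal(scale=0.1, size=128)  # noisy copy
print(rank_candidates(probe, gallery))  # person_42 should rank first
```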

Enforcement decisions ultimately lie with officers. However, people often believe that AI is infallible and don't question the results. On top of this, using automated tools is much easier than making comparisons with the naked eye.

AI-powered law enforcement aids also psychologically distance police officers from citizens. This removal from the decision-making process allows officers to separate themselves from their actions. Users also sometimes selectively follow computer-generated guidance, favoring advice that matches stereotypes, including those about Black criminality.

There's no solid evidence that FRT improves crime control. Nonetheless, officials appear willing to tolerate these racialized biases as cities struggle to curb crime. This leaves people vulnerable to encroachments on their rights.

The time for blind acceptance of this technology has passed. Software companies and law enforcement must take immediate steps towards reducing the harms of this technology.

For companies, creating reliable facial recognition software begins with balanced representation among designers. In the U.S., most software developers are white men. Research shows the software is much better at identifying members of the programmers' race. Experts attribute such findings largely to engineers' unconscious transmittal of own-race bias into algorithms.

Own-race bias creeps in as designers unconsciously focus on facial features familiar to them. The resulting algorithm is mainly tested on people of their race. As such, many U.S.-made algorithms learn by looking at more white faces, which fails to help them recognize people of other races.

Using diverse training sets can help reduce bias in FRT performance. Algorithms learn to compare images by training with a set of photos, and disproportionate representation of white males in training images produces skewed algorithms. And because Black people are overrepresented in mugshot databases and other image repositories commonly used by law enforcement, AI is more likely to mark Black faces as criminal, leading to the targeting and arrest of innocent Black people.

We believe that the companies that make these products need to take staff and image diversity into account. However, this does not remove law enforcement's responsibility. Police forces must critically examine their methods if we want to keep this technology from worsening racial disparities and leading to rights violations.

For police leaders, uniform similarity score minimums must be applied to matches. After the facial recognition software generates a lineup of potential suspects, it ranks candidates based on how similar the algorithm believes the images are. Currently, departments regularly decide their own similarity score criteria, which some experts contend raises the chances of wrongful and missed arrests.
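A uniform similarity score minimum would simply mean discarding any candidate whose score falls below an agreed floor before a human ever reviews the lineup. Building on the hypothetical ranking sketch above, the filter might look like the following; the 0.9 floor is an arbitrary illustration, not a recommended standard.

```python
# Hypothetical threshold filter applied to a ranked candidate list of
# (name, similarity_score) pairs, as produced by a one-to-many search.
# The minimum score is an arbitrary illustration.

def apply_minimum_score(candidates: list[tuple[str, float]],
                        minimum: float = 0.9) -> list[tuple[str, float]]:
    """Keep only candidates whose similarity score meets the uniform minimum."""
    return [(name, score) for name, score in candidates if score >= minimum]

# If no candidate clears the minimum, the lineup comes back empty rather
# than presenting a weak match as a potential suspect.
print(apply_minimum_score([("person_17", 0.72), ("person_88", 0.65)]))  # []
```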

FRT's adoption by law enforcement is inevitable, and we see its value. But if racial disparities already exist in enforcement outcomes, then without adequate regulation and transparency this technology will likely exacerbate inequities like those seen in traffic stops and arrests.

Fundamentally, police officers need more training on FRT's pitfalls, human biases and historical discrimination. Beyond guiding officers who use this technology, police and prosecutors should also disclose that they used automated facial recognition when seeking a warrant.

Although FRT isn't foolproof, following these guidelines will help defend against uses that drive unnecessary arrests.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

See the original post:
Police Facial Recognition Technology Can't Tell Black People Apart - Scientific American

Porsche Taycan Gets EV Charging Station Finder in Apple Maps – Car and Driver

Porsche has added integration for Apple Maps to include charger locations for U.S. Taycan models, giving CarPlay users yet another reason to stick with the software. The car was already equipped with Porsche's native charging planner, which can suggest stops based on information like the vehicle's state of charge (SOC), expected traffic conditions, and average speed. But the reality is that most owners seem to prefer third-party software like Apple CarPlay and Android Auto. As for Android, a Porsche spokesperson told Car and Driver that the Taycan does come with Android Auto capability as standard, but it doesn't have the EV SOC integration or charge stop suggestions that the new CarPlay system does.


The new integration means that Taycan owners won't need to leave CarPlay or settle for using the native navigation system when trying to map out charging stops. On top of doing a lot of the same quality-of-life things the native system does (like analyzing SOC and expected traffic), the Apple system can also analyze elevation changes along a given route to produce a more accurate estimate of battery usage. According to Porsche, if you allow the vehicle's SOC to deplete to a low enough margin, the new software will automatically offer a route to the nearest compatible charging station.
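As a rough illustration of the arithmetic such a planner performs, the sketch below checks whether the remaining charge covers a route once a simple elevation penalty is added to flat-road consumption, and flags a charging stop if not. The battery size, consumption figures, climb factor and reserve margin are all assumptions for illustration, not Porsche's or Apple's actual models.

```python
# Rough, hypothetical sketch of an SOC-based charging-stop check that adds
# an elevation penalty to flat-road consumption. All constants are
# illustrative assumptions, not Porsche's or Apple's actual figures.

BATTERY_KWH = 93.4            # stand-in pack size (assumed)
FLAT_KWH_PER_KM = 0.20        # flat-road consumption (assumed)
CLIMB_KWH_PER_100M = 0.5      # extra energy per 100 m of net climb (assumed)
RESERVE_FRACTION = 0.10       # keep 10% of the pack in reserve (assumed)

def needs_charging_stop(soc_percent: float,
                        distance_km: float,
                        net_climb_m: float) -> bool:
    """Return True if the estimated route energy exceeds the usable charge."""
    available_kwh = BATTERY_KWH * (soc_percent / 100 - RESERVE_FRACTION)
    route_kwh = distance_km * FLAT_KWH_PER_KM + (net_climb_m / 100) * CLIMB_KWH_PER_100M
    return route_kwh > available_kwh

# 220 km route with 800 m of net climbing, starting at 45% charge:
print(needs_charging_stop(45, 220, 800))  # True -> suggest a charging stop
```

A production planner would of course also weigh charger availability, charging curves and traffic, which is exactly the data the vehicle feeds to CarPlay.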

The system relies on both CarPlay and the information fed to it from the vehicle. That means the normal Apple Maps app on your phone won't give the same charging recommendations. The system should work with any Taycan, but according to Porsche, any models from 2021 or earlier will need to go to a service center for a free software update. Porsche also provided a link for setup and FAQs for the software, which can be found here.


Associate News Editor

Jack Fitzgerald's love for cars stems from his as-yet-unshakable addiction to Formula 1. After a brief stint as a detailer for a local dealership group in college, he knew he needed a more permanent way to drive all the new cars he couldn't afford and decided to pursue a career in auto writing. By hounding his college professors at the University of Wisconsin-Milwaukee, he was able to travel Wisconsin seeking out stories in the auto world before landing his dream job at Car and Driver. His new goal is to delay the inevitable demise of his 2010 Volkswagen Golf.

See the rest here:
Porsche Taycan Gets EV Charging Station Finder in Apple Maps - Car and Driver

Tesla to roll out free Full Self-Driving software, but there’s a catch. Know here – HT Auto

Tesla is planning to roll out its Full Self-Driving (FSD) software to consumers for free. Tesla CEO Elon Musk has said that the company plans to offer customers FSD free for one month as a trial. Musk has confirmed via a tweet that all Tesla car owners in North America will get a one-month FSD free trial. After that, the company will roll out the trial to consumers in other regions around the world.

By: HT Auto Desk | Updated on: 15 May 2023, 13:11 IST

As Tesla aims to get more users to sample its much-hyped FSD software, the company believes a one-month free trial will give consumers a chance to try and test the technology, which is claimed to allow vehicles to run autonomously without any driver intervention - a significantly more advanced version of the carmaker's existing semi-autonomous driver-assistance technology, known as Autopilot. Tesla CEO Elon Musk was responding to a tweet from a user who wanted to know when the subscription option for FSD would be released in Canada. The billionaire confirmed that the free trials would be coming soon, paving the way for subscriptions.


Currently, Tesla is offering the FSD software's beta version to a select number of consumers. A few days back, Musk hinted that Tesla would roll out FSD more widely once it's fully functional and glitch-free. His latest tweet further indicates that the company is nearing a smoother, more functional FSD, seeking to avoid the embarrassment it faced when it first rolled out the software and the technology was found to be glitchy. "Once FSD is super smooth (not just safe), we will roll out a free month trial for all cars in North America. Then extend to rest of world after we ensure it works well on local roads and regulators approve it in that country," Musk wrote in his latest tweet. However, despite hinting at a nearing rollout, neither Tesla nor its CEO has given a specific timeframe for the launch.

First Published Date: 15 May 2023, 13:11 IST

Read more here:
Tesla to roll out free Full Self-Driving software, but there's a catch. Know here - HT Auto