Archive for the ‘Free Software’ Category

Should You Invest in the Invesco Dynamic Software ETF (PSJ)? – Yahoo Finance

The Invesco Dynamic Software ETF (PSJ) was launched on 06/23/2005, and is a passively managed exchange traded fund designed to offer broad exposure to the Technology - Software segment of the equity market.

Retail and institutional investors increasingly turn to passively managed ETFs because they offer low costs, transparency, flexibility, and tax efficiency; these kinds of funds are also excellent vehicles for long-term investors.

Additionally, sector ETFs offer convenient ways to gain low-risk, diversified exposure to a broad group of companies in particular sectors. Technology - Software is one of the 16 broad Zacks sectors within the Zacks Industry classification. It is currently ranked 8, placing it in the top 50%.

Index Details

The fund is sponsored by Invesco. It has amassed assets of over $481.40 million, making it an average-sized ETF attempting to match the performance of the Technology - Software segment of the equity market. PSJ seeks to match the performance of the Dynamic Software Intellidex Index before fees and expenses.

The index comprises stocks of software companies. It is designed to provide capital appreciation by thoroughly evaluating companies on a variety of investment merit criteria, including fundamental growth, stock valuation, investment timeliness and risk factors.

Costs

Investors should also pay attention to an ETF's expense ratio. Lower cost products will produce better results than those with a higher cost, assuming all other metrics remain the same.

Annual operating expenses for this ETF are 0.58%, making it on par with most peer products in the space.
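
As a rough, hypothetical illustration of why the expense ratio matters (the 8% gross return below is an assumption for the sketch, not a Zacks figure), the long-run drag of fees can be estimated by compounding net returns:

```python
# Sketch: ending value of a hypothetical $10,000 investment under two expense
# ratios, assuming an illustrative 8% gross annual return over 20 years.
def ending_value(principal: float, gross_return: float, expense_ratio: float, years: int) -> float:
    net_return = gross_return - expense_ratio  # simple approximation of the fee drag
    return principal * (1 + net_return) ** years

psj = ending_value(10_000, 0.08, 0.0058, 20)  # PSJ's 0.58% expense ratio
xsw = ending_value(10_000, 0.08, 0.0035, 20)  # XSW's 0.35%, discussed under Alternatives below
print(f"0.58% fee: ${psj:,.0f} vs 0.35% fee: ${xsw:,.0f}")
```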

It has a 12-month trailing dividend yield of 0.09%.

Sector Exposure and Top Holdings

Even though ETFs offer diversified exposure that minimizes single stock risk, investors should also look at the actual holdings inside the fund. Luckily, most ETFs are very transparent products that disclose their holdings on a daily basis.

This ETF has its heaviest allocation in the Information Technology sector, at about 63.30% of the portfolio. Telecom and Healthcare round out the top three.

Looking at individual holdings, Docusign Inc (DOCU) accounts for about 5.79% of total assets, followed by Snap Inc (SNAP) and Adobe Inc (ADBE).

The top 10 holdings account for about 47.53% of total assets under management.

Performance and Risk

So far this year, PSJ has gained about 14.95%, and is up roughly 22.49% in the last one year (as of 06/26/2020). During this past 52-week period, the fund has traded between $74.11 and $115.05.

The ETF has a beta of 1.07 and standard deviation of 26.32% for the trailing three-year period, making it a high risk choice in the space. With about 32 holdings, it has more concentrated exposure than peers.

Alternatives

Invesco Dynamic Software ETF holds a Zacks ETF Rank of 1 (Strong Buy), which is based on expected asset class return, expense ratio, and momentum, among other factors. Because of this, PSJ is an excellent option for investors seeking exposure to the Technology ETFs segment of the market. There are other ETFs in the space that investors could consider as well.

SPDR S&P Software & Services ETF (XSW) tracks the S&P Software & Services Select Industry Index, while the iShares Expanded Tech-Software Sector ETF (IGV) tracks the S&P North American Technology-Software Index. XSW has $240.74 million in assets and IGV has $4.69 billion. XSW has an expense ratio of 0.35% and IGV charges 0.46%.

Bottom Line

To learn more about this product and other ETFs, to screen for products that match your investment objectives, and to read articles on the latest developments in the ETF investing universe, please visit the Zacks ETF Center.


Read the rest here:
Should You Invest in the Invesco Dynamic Software ETF (PSJ)? - Yahoo Finance

2021 F-150: Ford's new truck has hands-free driving and hybrid options – The Verge

Ford has revealed the 2021 model of the F-150, and many buyers will find the newest version of the automaker's super-popular pickup truck stuffed with technology when it gets released later this year. The new F-150 will have an optional hands-free driving mode, be capable of receiving over-the-air software updates, and come with wireless Apple CarPlay and Android Auto. The company will also start selling a hybrid version of the 2021 F-150 that can travel about 700 miles on a full tank of gas.

The hands-free driving feature, which Ford calls Active Drive Assist, is the same one that's coming to the Mustang Mach-E electric SUV next year. That will make the F-150 one of the only vehicles in the US with hands-free driving when the feature is available, along with GM's Super Cruise-equipped Cadillac cars and SUVs. Active Drive Assist will only be available on divided highways that Ford has mapped, and customers will have to buy a specific hardware package that enables the feature, which will include a driver monitoring system that uses a driver-facing infrared camera to make sure they're watching the road. They'll also have to pay extra for the Active Drive Assist software.

Even if buyers don't opt for Active Drive Assist, the new F-150 will still offer the rest of the features that make up Ford's Co-Pilot360 driver assistance system. Some will come standard, like automatic emergency braking and pedestrian detection, which is good news considering the behemoth size of the F-150. Others will cost extra, like adaptive cruise control, lane centering, post-collision braking, trailer backup assist, and more.

The bigger versions of the new F-150 will come with a 12-inch landscape touchscreen in the center of the dashboard (as opposed to the vertical display found in Fiat Chrysler's competitor, the Dodge Ram), while smaller versions of the truck will have an eight-inch screen. Both will run Sync 4, the newest version of Ford's infotainment system. It is one of the only vehicles on the road to offer wireless versions of both Apple CarPlay and Android Auto, and there's a wireless charging pad under the infotainment screen, as well as USB-A and USB-C ports. All models will have a 12-inch digital instrument cluster behind the steering wheel.

The new F-150 can be updated over the air, too. Ford says that won't just be for improvements to the infotainment system. Rather, the company says the updates are "bumper-to-bumper," meaning they can support preventative maintenance, reduce repair trips, provide improved performance and ultimately result in more vehicle up-time.

Ford is working on an all-electric version of the F-150 due out in 2022. But in the meantime, it's now finally going to offer a hybrid version of the truck, following in the footsteps of General Motors and Fiat Chrysler.

But while those other trucks were mild hybrids that used small electric motors and batteries to increase efficiency, Ford's will be a full hybrid powertrain, utilizing the company's fourth-generation version of the technology. A 35kW (47 horsepower) electric motor will pull power from a 1.5kWh battery pack, giving the truck a boost during acceleration and allowing it to get around 700 miles on a 30.6-gallon tank of gas, roughly 23 miles per gallon.
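
The range and fuel economy figures quoted above are consistent with each other; here is a quick back-of-the-envelope check using only those numbers:

```python
# Back-of-the-envelope check of the figures quoted above.
tank_gallons = 30.6
range_miles = 700
print(f"Implied fuel economy: {range_miles / tank_gallons:.1f} mpg")  # ~22.9, i.e. "roughly 23 mpg"
```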

Unfortunately, Ford isn't releasing official fuel economy figures just yet, so it's hard to say exactly how much more efficient the hybrid version will be compared to its gas-only counterparts. The hybrid F-150 will be sold in rear-wheel drive and all-wheel drive variants, with each truck marrying the electric components to a twin-turbocharged V6 engine. And the hybrid F-150 does not have a plug, so the onboard battery will be recharged by the energy created when the truck uses the electric motor to help brake.

All told, the new F-150 represents what is likely Ford's biggest year-to-year jump when it comes to the technology that powers the company's most popular vehicle segment. Ford says it's not just cramming in technology for technology's sake, and that each of the new additions serves a genuine purpose. There are plenty of more straightforward truck features, and new utility options that should please those who spend the most time inside these vehicles, like an onboard generator and a new center-console work surface that can hold a laptop. The new seats even fold back 180 degrees, so owners who really love their trucks can spend some real downtime inside them, too.

Link:
2021 F-150: Ford's new truck has hands-free driving and hybrid options - The Verge

OTF’s Work Is Vital for a Free and Open Internet – EFF

Keeping the internet open, free, and secure requires eternal vigilance and the constant cooperation of freedom defenders all over the web and the world. Over the past eight years, the Open Technology Fund (OTF) has fostered a global community and provided support, both monetary and in-kind, to more than four hundred projects that seek to combat censorship and repressive surveillance, enabling more than two billion people in over 60 countries to more safely access the open Internet and advocate for democracy.

OTF has earned trust over the years through its open source ethos, transparency, and a commitment to independence from its funder, the US Agency for Global Media (USAGM), which receives its funding through Congressional appropriations.

In the past week, USAGM has removed OTF's leadership and independent expert board, prompting a number of organizations and individuals to call into question OTF's ability to continue its work and maintain trust among the various communities it serves. USAGM's new leadership has been lobbied to redirect funding for OTF's open source projects to a new set of closed-source tools, leaving many well-established tools in the lurch.

EFF has maintained a strong relationship with OTF since its inception. Several of our staff members serve or have served on its Advisory Council, and OTF's annual summits have provided crucial links between EFF and the international democracy tech community. OTF's support has been vital to the development of EFF's software projects and policy initiatives. Guidance and funding from OTF have been foundational to Certbot, helping the operators of tens of millions of websites use EFF's tool to generate and install Let's Encrypt certificates. The OTF-sponsored fellowship for Wafa Ben-Hassine produced impactful research and policy analysis about how Arab governments repress online speech.

OTF's funding is focused on tools to help individuals living under repressive governments. For example, OTF-funded circumvention technologies including Lantern and WireGuard are used by tens of millions of people around the world, including millions of daily users in China. OTF also incubated and assisted in the initial development of the Signal Protocol, the encryption back-end used by both Signal and WhatsApp. By sponsoring Let's Encrypt's implementation of multi-perspective validation, OTF helped protect the 227 million sites using Let's Encrypt from BGP attacks, a favorite technique of nation-states that hijack websites for censorship and propaganda purposes.

While these tools are designed for users living under repressive governments, they are used by individuals and groups all over the world, and benefit movements as diverse as Hong Kong's democracy movement, the movement for Black lives, and LGBTQ+ rights defenders.

OTF requires public, verifiable security audits for all of its open-source software grantees. These audits greatly reduce risk for the vulnerable people who use OTF-funded technology. Perhaps more importantly, they are a necessary step in creating trust between US-funded software and foreign activists in repressive regimes. Without that trust, it is difficult to ask people to risk their lives on OTF's work.

It is not just OTF that is under threat, but the entire ecosystem of open source, secure technologiesand the global community that builds those tools. We urge you to join EFF and more than 400 other organizations in signing the open letter, which asks members of Congress to:

EFF is proud to join the voices of hundreds of organizations and individuals across the globe calling on USAGM and OTF's board to recommit to the value of open source technology, robust security audits, and support for global Internet freedom. These core values, which have been a mainstay of OTF's philanthropy, are vital to uplifting the voices of billions of technology users facing repression all over the world.

Read more from the original source:
OTF's Work Is Vital for a Free and Open Internet - EFF

Formal methods as a path toward better cybersecurity – Brookings Institution

Five years ago, cybersecurity researchers accomplished a rare feat. A team at the Pentagon's far-out research arm, the Defense Advanced Research Projects Agency (DARPA), loaded special software into a helicopter's flight control computer. Then they invited expert hackers to break into the software. After repeated attempts, the flight control system stood strong against all attempts to gain unauthorized control.

This outcome was unusual. Experienced hackers who are given direct, privileged access to software almost always find a way in. The reason is simple. Decades after the birth of computer programming, modern software products are riddled with flaws, many of which create security vulnerabilities that attackers can easily exploit to slip through digital defenses. This is why reducing the error rate in software code is essential to turn the tide against relentless computer criminals and foreign adversaries that steal wealth and menace critical infrastructure with relative impunity.

How was DARPA's custom flight control software able to shrug off its assailants? The researchers turned to formal methods, a frequently overlooked group of technologies that programmers can use to create ultra-secure, ultra-reliable software. DARPA's experiment is one of several examples that underscore the potential for formal methods to remake software security. They herald a not-too-distant future when radically safer, more secure software can allow us to embrace other emerging technologies without catastrophic consequences.

Before it is ready for primetime, any piece of software should be able to satisfy at least two criteria:

1. It does what its users intend it to do under normal conditions.
2. It does not do anything else, even when it is misused or attacked.

Because most customers use software as it is intended, software programmers devote most of their attention to satisfying the first criterion: ensuring the software works properly under normal conditions. This is relatively easy to evaluate through user feedback, as customers tend to be vocal when a piece of software obviously misbehaves or omits an advertised feature.

The second criterion is much trickier, and it is the bane of the cybersecurity community. Virtually all software code contains defects that can cause the software to fail in some way. Humans write software, and our species is naturally prone to mistakes. Larger and more complex software applications multiply opportunities for committing and overlooking errors by orders of magnitude. Human minds excel at creating highly capable software (the first criterion), but they are ill-equipped to identify and eliminate software defects. One defect might be harmless, while another might crash the entire program. Others can lead to security failures, which happen when human attackers purposefully exploit defects to cause a specific kind of software failure that achieves their own objectives, such as leaking private data or handing control to the attacker.

The software industry has coalesced around two methods for reducing the error rate in software code. The first is simply education and knowledge exchange. Experts collaborate on a global scale to share information about software vulnerabilities and how to fix them. Yet as computing has matured, this body of knowledge has become overwhelming. It is extremely challenging to know when to apply lessons learned and how to verify implementation. The other common method for improving software quality is intensive testing. Yet this can consume massive resources, and most testing only indicates the presence of defects; it cannot prove the absence of them. In the context of cybersecurity, where attackers actively hunt for any possible flaw that designers overlooked, these two methods have proved insufficient.
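
To make the limitation of testing concrete, here is a minimal sketch using the hypothesis property-based testing library (an assumed tool choice, not one named in the article): even hundreds of generated inputs can only reveal defects they happen to hit, never demonstrate that none remain.

```python
# Sketch: property-based testing generates many inputs, but a passing run only
# means no counterexample was found; it does not prove that none exists.
from hypothesis import given, strategies as st

def absolute_value(x: int) -> int:
    # Deliberately buggy on exactly one input, to mimic a rare, overlooked defect.
    if x == -1234567:
        return x
    return abs(x)

@given(st.integers())
def test_absolute_value_is_non_negative(x: int) -> None:
    # Passes on almost every run, because the generator rarely hits the bad input.
    assert absolute_value(x) >= 0
```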

Formal methods encompass a group of technologies that aim to manage these problems much more effectively by supplementing human resources with computational resources. In the 1970s, when it became clear that computers would become the foundation of military power, the defense community realized it needed greater assurance that its software was of the highest quality and free of security issues. Early programmers knew they could not rely on human judgment alone to ferret out all possible security vulnerabilities. They needed ways to prove that critical pieces of software would not crash by accident or contain unknown security flaws that attackers could exploit. They wanted to maximize confidence that a specific software application would do only what its authorized users intended, and nothing else.

What is the best way to prove an objective truth? Math and logic. The pioneers of formal methods adapted mathematical logic to construct abstract representations of software ("I want this program to do X") and then use advanced mathematical theorems to prove that the software code they wrote would only accomplish X.

The term formal methods evolved over time, and today it represents a spectrum of sophistication, from relatively simple instructions for planning a software project to automated programs that function like a super spell-check for code. The method of analysis varies between different formal methods, but it is largely automated and ideally carried out with mathematical precision. Lightweight formal methods are already in widespread use today. Static type theory, for example, has become a standard feature in several programming languages. These methods require no specialized knowledge to use and provide low-cost protection against common software faults.
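
As a minimal sketch of such a lightweight formal method (Python's optional type annotations are an assumed illustration here, standing in for the statically typed languages the article refers to), a checker such as mypy can reject a whole class of faults before the program ever runs:

```python
# Running a static checker such as `mypy` over this file flags a call like
# average("not a list") at analysis time; without type checking, the same
# fault only surfaces as a runtime error.
def average(values: list[float]) -> float:
    return sum(values) / len(values)

print(average([1.0, 2.0, 3.0]))  # OK: 2.0

# average("not a list")  # uncomment to see the type checker reject it before execution
```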

More sophisticated formal methods can prevent more complex problems. Formal verification is one such example: it enables programmers to prove that their software does not contain certain errors and behaves exactly according to specification. These more advanced methods tend to require specialized knowledge to apply, but with recent advances this bar has been coming down. (For a more detailed description of different types of formal methods, curious readers should read pages 6-9 of this report by the National Institute of Standards and Technology.)
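
As a rough sketch of what proving the absence of an error can look like (using the Z3 SMT solver's Python bindings, an assumed tool and a far simpler setting than verifying a real program), the snippet below asks the solver whether any input violates a stated property; an "unsat" answer means no such input exists:

```python
# Miniature formal verification: prove that a clamping computation always stays
# within its bounds for *all* inputs, not just the ones a test happens to try.
from z3 import Ints, If, Solver, And, Not, unsat

x, low, high = Ints("x low high")
clamped = If(x < low, low, If(x > high, high, x))

solver = Solver()
solver.add(low <= high)                               # precondition
solver.add(Not(And(low <= clamped, clamped <= high))) # negation of the desired property
print("property proved" if solver.check() == unsat else "counterexample found")
```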

Problems with formal methods and recent innovation

Like the neural networks that revolutionized artificial intelligence, formal methods are a technology undergoing a renaissance after spending decades in the shadows. As software became more complicated, applying the more advanced tools for proving code (the ones that could provide the highest assurance that security vulnerabilities were absent) became exponentially more difficult. As the National Institute of Standards and Technology explains, formal methods developed a reputation as taking far too long, in machine time, person years and project time, and requiring a PhD in computer science and mathematics to use them. For a long time, formal methods were relegated to mission-critical use cases, such as nuclear weaponry or automotive systems, where designers were willing to devote immense time and resources to creating error-free software. But research into formal methods continued, led by a dedicated corps of experts in academia, federal research institutions, and a handful of specialized companies.

More recent developments, including DARPA's helicopter project, suggest formal methods are poised to remake how we design software and transform cybersecurity. In November 2016, the National Institute of Standards and Technology delivered a comprehensive report to the White House recommending alternative ways to achieve a dramatic reduction in software vulnerabilities. Devoting six pages to formal methods, the report noted that formal methods have become mainstream in many behind-the-scenes applications and show significant promise for both building better software and for supporting better testing.

Leading technology companies have quietly rolled out formal methods in their core businesses. Amazon Web Services (AWS), arguably one of the most important infrastructure providers on the planet, has an entire team that uses formal methods to create provable security for its customers. Facebook has shown how formal verification techniques can be integrated into a "move fast and break things" approach with its Infer system, which continuously verifies the code in every update for its mobile applications. Microsoft has also stood up its own dedicated team on formal verification. As one team member explained last year, "Proving theorems about programs has been a dream of computer science for the last 60 years or more, and we're finally able to do this at the scale required for an important, widely deployed security-critical piece of software." And it is not just Big Tech. Specialty companies like Galois, Synopsys, and MathWorks are creating a more competitive market for sophisticated formal methods solutions that companies of various sizes can put to work.

Looking forward, the National Science Foundation's ongoing DeepSpec Expedition in Computing has demonstrated the applicability of these methods to increasingly complex engineering tasks, including the development of entire operating systems (which tend to be much larger than single applications), database engines, memory managers, and other essential computing and software components. These successes represent a significant step forward for the field, which has long sought to find reliable, low-cost/low-time methods for engineering such components.

These clear signs of progress notwithstanding, the most sophisticated types of formal methods, such as full-blown formal verification, are still a long way from becoming a go-to tool for the average software developer. The organizations listed above are not representative, of course, and challenges still remain to bring formal methods to the rest of the software industry. We need an ecosystem of tools, more training for working engineers, and more consensus on when to deploy which methods. We also need to begin changing the way software standards committees publish their work; instead of prose, they should begin publishing formal models that allow the application of formal methods. Lastly, we need to begin educating technology decisionmakers about these capabilities and their ramifications.

There are at least two reasons why industry and government should seize on ongoing innovations in the field and accelerate adoption.

First, unlike many cybersecurity measures, proper application of formal methods does not simply drive costs up. Since formal methods reduce the overall defect count in software, systems built with formal methods can require less maintenance and thus be cheaper to operate than today's ad-hoc alternatives. Additionally, further improvements in automation are expected to provide these benefits without adding significant cost to the initial engineering efforts. Whereas most security measures drive costs and hurt profit margins, proper use of formal methods can help defeat attackers while improving the bottom line. Even where software is too complicated to use formal verification, the most robust weapon in the formal methods arsenal, much more basic formal methods can still lower software lifecycle costs simply by enforcing more rigorous development practices that some software developers know, but don't use.

Second, the steady drumbeat for software liability may soon change the cost calculus for software developers, who have traditionally not borne all the costs of unreliable, flawed software. The final report issued by Congress's Cyberspace Solarium Commission recommended that Congress pass a law establishing liability for final goods assemblers of software that contains known and unpatched vulnerabilities. Some types of formal methods offer clear opportunities to establish more objective standards of care for determining such liability.

Just one bug in one line of code

Today, we have a global software industry that frequently creates software in an ad-hoc manner, churning out products without truly knowing what is in them, how they might fail, and what will happen if they do. The situation was tolerable when software did not run the world, but computing now either controls or informs nearly every aspect of the economy, politics, and social life. And because the individual components that make up a larger software program are interdependent, even a single error in any phase of the manual development process (design, implementation, testing, evaluation, operation, maintenance) can be catastrophic. One bug in one line of code can create a security vulnerability that spans millions of computer systems, enabling data theft and digital disruption on a massive scale.

Formal methods are not the ultimate answer to cybersecurity. Even their most sophisticated manifestation, formal verification, cannot guarantee perfect security. Neither can the world's best engineers guarantee that a skyscraper will not collapse. But through rigorous standards, objective testing, and the scientific method, they have achieved an outstanding record. By injecting similar rigor into the software industry, formal methods can, at the very least, give us much higher assurance that digital technology will behave.

Tim Carstens is an adviser at the Cyber Independent Testing Lab. David Forscey is the managing director of the Aspen Cybersecurity Group. He previously worked in the Center for Best Practices at the National Governors Association and Third Way.

Read more here:
Formal methods as a path toward better cybersecurity - Brookings Institution

Modern development – Cohesity: Defining a new rhythm for the paroxysms of data schism – ComputerWeekly.com

This Computer Weekly Developer Network series is devoted to examining the leading trends that go towards defining the shape of modern software application development.

As we have initially discussed here, with so many new platform-level changes now playing out across the technology landscape, how should we think about the cloud-native, open-compliant, mobile-first, Agile-enriched, AI-fuelled, bot-filled world of coding and how do these forces now come together to create the new world of modern programming?

This contribution comes from Ezat Dayeh in his role as SE Manager UK&I at Cohesity, a company known for its capabilities aligned to tackling mass data fragmentation through data management, beginning with backup.

Dayeh writes as follows

What 2020 has shown (given the Covid-19 pandemic, political upheavals and everything else) is that business processes need to be adaptable and flexible if they are going to survive the new ways of working and respond to changing consumer habits and the difficult economic challenges.

Software development cycles have always been pressured to deliver right-first-time production software fast, so dev teams scrum and sprint, but what often holds them back is the ability to access the data they need because of infrastructure issues. Data is too often fragmented across a host of silos that can't match the speed needed (and also cost too much) to give developers the tools to make sure their software is fit for purpose.

Cohesity's Dayeh: Say goodbye to monophasic development, say hello to the new mix.

What modern software development needs is an infrastructure that provides an agile, simple and risk-free development environment.

Test data, for example, is a critical part of the software development lifecycle. Development and test teams need enough quality test data to build and test applications in a scenario that reflects the business reality. Since it is a foundational element, changing this variable affects larger processes. Faster access to test data can directly speed up time to release. Higher quality and more realistic test data can reduce the number of defects customers encounter in production software.

There's always a 'but' (so here it is): traditional test data management infrastructure is misaligned with modern software development.

Monophasic development and monolithic applications have been replaced in favour of more iterative development predicated on microservices. The adoption of DevOps methodology and Agile development practices have evolved to support this trend. However, the underlying problem of data availability has largely remained unresolved.

The legacy approach of traditional test data management delays development and testing, given that provisioning the relevant data itself can take days or even weeks. And it is a legacy issue: it's a hard, technical problem to provision data to multiple teams, across multiple geographies, at an ever-growing pace.

But doesn't the public cloud overcome legacy issues?

To an extent the public cloud has accelerated the pace of innovation by bringing elasticity and economics while reducing time to market for new applications. However, it is not a panacea.

Public cloud environments carry over operational inefficiencies from their legacy on-premise environments and introduce several new challenges. For most organisations, their data footprint straddles multiple public clouds and on-prem environments. So test/dev requires data mobility between environments. The advent of dev/test in the cloud adds additional roadblocks, including the misalignment of formats among on-premise and public cloud VMs. This schism leads to manageability strains, presents an impediment to application mobility and often is accompanied by dramatic cost challenges.

A way of speeding up test/dev is to repurpose backup data as test data.

This means you can back up one server or database, instantly make a clone of it without consuming extra storage, and use that clone to help develop applications and software. It doesn't require additional infrastructure, and the management overhead is part and parcel of the backup.

This gives teams near-instant and self-service access to up-to-date and zero-cost clones, increasing the speed, quality and cost-effectiveness of development. Or to put it more succinctly: for faster software development, IT teams should look for software solutions that provide instant zero-cost clones.
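
A rough sketch of that workflow is shown below. It is illustrative only: the backup_client object and its methods are hypothetical placeholders standing in for whichever data management platform is in use, not Cohesity's actual API.

```python
# Hypothetical sketch: provisioning a zero-cost, writable clone of backed-up
# data for a test environment. All names here are illustrative placeholders.
def provision_test_copy(backup_client, database_name: str, test_env: str):
    # 1. Locate the most recent backup snapshot of the production database.
    snapshot = backup_client.latest_snapshot(database_name)

    # 2. Create a writable clone that references the snapshot's existing blocks
    #    rather than copying them, so no additional storage is consumed.
    clone = backup_client.clone_snapshot(snapshot, writable=True)

    # 3. Expose the clone to the dev/test environment and hand back a handle
    #    the team can use, and later tear down, on a self-service basis.
    return backup_client.attach(clone, target=test_env)
```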

Data management has a special role to play in terms of modern software application development and it can help define a new rhythm for the paroxysms of data schism.

Can you get with the beat?

Cohesity promises to consolidate data management silos with a single, web-scale solution. (Approved Image Source: Cohesity).

Continue reading here:
Modern development - Cohesity: Defining a new rhythm for the paroxysms of data schism - ComputerWeekly.com