Archive for the ‘Free Software’ Category

Category:Free software – Wikimedia Commons

English: Free software, roughly, is software that grants the four essential freedoms: to use, to study and modify, to copy, and to redistribute it for any purpose. The free software movement is a social movement to protect, for software users, the right of people to control their computers and to cooperate with others, when they choose, as part of a community. The movement was launched by the GNU project in 1983. The Estonian, French and Finnish descriptions, translated, make the same points: free software comes with source code that can be modified and redistributed, and its use, study, modification and distribution are guaranteed by a so-called free licence. Free software is also known by other terms, such as "open-source software", "software libre", "FLOSS" and "FOSS". Note that freeware is almost never free software.

See the original post:
Category:Free software - Wikimedia Commons

Free PTC Software Downloads | PTC

Free Trials

PTC Navigate

15-day free trial of the PTC Navigate View

Creo Parametric

30-day free trial of Creo Parametric and Creo Flexible Modeling Extension

ThingWorx Studio

90-day free trial of PTC's augmented reality offering for the industrial enterprise

Model-Based Systems Engineering Solution

30-day free trial of Model-Based Systems Engineering

Windchill Quality Solution

30-day free trial of the reliability analysis toolkit

Servigistics Arbortext Editor

30-day free trial of the structured authoring tool

Download the free XMA trial to analyze your Creo models and improve your modeling practices

PTC Mathcad Express

Free engineering calculation software

Creo Sketch

A 2D sketching software download

Creo Elements/Direct Modeling Express

A 3D CAD direct modeling software download

Creo View Express

A 2D and 3D CAD viewer software download

Creo View Mobile

A 3D CAD viewer software download for iPad and iPhone

PTC Mathcad/SolidWorks Integration

CAD Software:

Creo Granite gPlugs for Improved Interoperability

View post:
Free PTC Software Downloads | PTC

Free software is suffering because coders don’t know how to write documentation – TNW

GitHub just published its 2017 Open Source Survey. The popular social coding service surveyed over 5,500 members of its community, from over 3,800 projects on github.com. It also spoke to 500 coders working on projects from outside the GitHub ecosystem.

The Open Source Survey asked a broad array of questions. One that caught my eye was about problems people encounter when working with, or contributing to, open source projects. An incredible 93 percent of people reported being frustrated with incomplete or confusing documentation.

That's hardly a surprise. There are a lot of projects on GitHub with the sparsest of descriptions, and scant instruction on how to use them. If you aren't clever enough to figure it out for yourself, tough.

That's unfortunate. People don't quite realize how vital documentation is to the success of a project.

Mike Pope, a well-respected technical writer, once summed up the need for documentation thusly:

We've been known to tell a developer, "If it isn't documented, it doesn't exist." Not only does it have to be doc'd, but it has to be explained and taught and demonstrated. Do that, and people will be excited, not about your documentation, but about your product.

I came across another brilliant quote about documentation on Stack Overflow co-founder Jeff Atwood's blog, this time by JavaScript developer Nicholas Zakas.

Lack of documentation. No matter how wonderful your library is and how intelligent its design, if you're the only one who understands it, it doesn't do any good. Documentation means not just autogenerated API references, but also annotated examples and in-depth tutorials. You need all three to make sure your library can be easily adopted.
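Zakas's distinction between autogenerated references and annotated examples is easy to see in practice. Here is a minimal sketch of what a well-documented function might look like; the function and its name are invented for illustration, not taken from any of the projects mentioned above:

```python
def moving_average(values, window):
    """Return the simple moving averages of `values` over `window` points.

    An autogenerated API reference would stop at the signature; the
    annotated example below is what actually gets a new user started.

    Parameters
    ----------
    values : list of float
        The input series, e.g. daily temperature readings.
    window : int
        Number of consecutive points to average; must be >= 1.

    Example
    -------
    >>> moving_average([1.0, 2.0, 3.0, 4.0], window=2)
    [1.5, 2.5, 3.5]
    """
    if window < 1:
        raise ValueError("window must be >= 1")
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]
```

The docstring format itself matters less than the presence of a worked example a reader can paste and run.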

But beyond the practical reasons for documentation, there's also the argument that it builds a sense of community. Not only do you know who your fellow collaborators are and what they've accomplished, there's also a clearly defined sense of mission and purpose.

Here's how the Open Source Survey explained it (emphasis theirs):

Documentation helps create inclusive communities. Documentation that clearly explains a project's processes, such as contributing guides and codes of conduct, is valued more by groups that are underrepresented in open source, like women.

According to the GitHub Open Source Survey, 60 percent of contributors rarely or never contribute to documentation. And that's fine.

Documenting software is extremely difficult. People go to university to learn to become technical writers, spending thousands of dollars and several years of their lives. It's not really reasonable to expect every developer to know how to do it, and do it well.

And then there's the fact that twenty-five percent of open source contributors say they read and write English "less than very well".

But there's a golden opportunity here. I'd love to see the thought leaders in the industry (Google and GitHub, if I have to point a finger) step up.

Google just launched a free online course, trying to tempt language experts to become localizers. Why can't it do the same for writers, in order to teach them the skills required to write about software?

Similarly, GitHub could launch a course aimed at introducing writers with no previous software development experience to Git.

Not only would this help solve the documentation drought, but it would also be a loud demonstration that you don't have to be a developer to contribute to open source.


Visit link:
Free software is suffering because coders don't know how to write documentation - TNW

Google ends support for its Nik Collection photo editing software – TechCrunch


Google ends support for its Nik Collection photo editing software
TechCrunch
Google rightly took the plaudits when it made its Nik Collection photo editing software available for free last year, removing the product's $149 price tag (which was once as high as $500), but unfortunately it looks like that move was a prelude to ...

Related coverage:
Google Abandons Its Nik Collection of Popular Photo Editing Software – PetaPixel (blog)
Google Abandons Its Free Photo Editing Tools – MakeUseOf
Google Is Ending Support For The Nik Collection Photo Tools – Android Headlines

Read more here:
Google ends support for its Nik Collection photo editing software - TechCrunch

Software simplified – Nature.com


In 2015, geneticist Guy Reeves was trying to configure a free software system called Galaxy to get his bioinformatics projects off the ground. After a day or two of frustration, he asked members of his IT department for help. They installed Docker, a technology for simulating computational environments, which enabled him to use a special version of Galaxy that came packaged with everything he needed, called a container. A slight tweak to the Galaxy settings, and he was done before lunch.

Reeves, at the Max Planck Institute for Evolutionary Biology in Plön, Germany, is one of many scientists adopting containers. As science becomes ever more data intensive, more software is being written to extract knowledge from those data. But few researchers have the time and computational know-how to make full use of it. Containers, packages of software code and the computational environment to run it, can close that gap. They help researchers to use a wider array of software, accelerate experiments and promote reproducibility.

Containers are essentially lightweight, configurable virtual machines: simulated versions of an operating system and its hardware that allow software developers to share their computational environments. Researchers use them to distribute complicated scientific software systems, thereby allowing others to execute the software under the same conditions that its original developers used. In doing so, containers can remove one source of variability in computational biology. But whereas virtual machines are relatively resource-intensive and inflexible, containers are compact and configurable, says C. Titus Brown, a bioinformatician at the University of California, Davis. Although configuring the underlying containerization software can be tricky, containers can be modified to add or remove tools according to the user's need, a flexibility that has boosted their popularity, he says. "I liked the idea of having something that works out of the box," says Reeves.

Lab-built tools rarely come ready to run. They often take the form of scripts or programming source code, which must be processed and configured. Much of the software requires additional tools and libraries, which the user may not have installed. Even if users can get the software to work, differences in computational environments, such as the installed versions of the tools it depends on, can subtly alter performance, affecting reproducibility. Containers reduce that complexity by packaging the key elements of the computational environment needed to run the desired software, including settings and add-ons, into a lightweight, virtual box. They don't alter the resources required to run it: if a tool needs a lot of memory, then so too will its container. But they make the software much easier to use, and the results easier to reproduce.

Depending on the software used (Docker, Singularity and rkt are popular), containers can run on Windows, Mac OS X, Linux or in the cloud. They can package anything from a single process to a complex environment such as Galaxy. Containerized tools can interact with each other, sharing data or building pipelines, for instance. Because each application resides in its own box, even tools that would ordinarily conflict with each other can run harmoniously.

Docker uses executable packages, called images, which include the tool to be contained as well as the developer's computational environment. To create a Docker image, a developer creates a configuration file with instructions on how to download and build all the required tools inside it. He or she then 'runs' the file to create an executable package. All the user then needs to do is retrieve the package and run it. Other tools can also generate images. The ReproZip program, for example, assembles Docker-compatible packages by watching as software tools run and tracing the input files and software libraries that the tool requires.
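The configuration file described above is conventionally named a Dockerfile. As a minimal sketch (the file names `requirements.txt` and `analyze.py` are illustrative, not from the article), a Dockerfile for a small Python analysis tool might look like this:

```dockerfile
# Start from a published base image that fixes the OS and Python version.
FROM python:3.11-slim

# Install pinned dependencies so every user gets the same library versions.
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

# Copy the analysis code itself into the image.
COPY analyze.py /app/analyze.py
WORKDIR /app

# The command that runs when someone starts the container.
ENTRYPOINT ["python", "analyze.py"]
```

With Docker installed, `docker build -t myanalysis .` 'runs' this file to build the image, and `docker run myanalysis` executes the tool inside the packaged environment, regardless of what is installed on the host.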

Deborah Bard, a computer scientist at the National Energy Research Scientific Computing Center in Berkeley, California, helps researchers to install their software on the lab's supercomputer. She recalls spending three or four days installing a complex software pipeline for telescope simulation and analysis. Using containers cut this time down to hours. "You can spend your time doing science instead of figuring out compiler versions," she says.

For Nicola Mulder, a bioinformatician at the University of Cape Town in South Africa, containers help to synchronize a cross-border bioinformatics network she runs in Africa, called H3ABioNet. Not all African institutions have access to the same computational resources, she explains, and Internet connectivity can be patchy. Containers allow researchers with limited resources to access tools that they otherwise might not be able to use.

They also allow researchers with sensitive genomic data to collaborate and compare findings without actually sharing the underlying data, Mulder says. And, if researchers at one site obtain different results from their colleagues at another, the standardization the containers provide could eliminate one of the reasons why.

Although computer scientists have multiple options for container platforms, Docker, an open-source project launched in 2013, is perhaps the most popular among scientists. It has a large registry of prebuilt containers and an active online community that competitors have yet to match. But many administrators of high-performance computing systems prohibit Docker use because it requires high-level administrative access privileges to run; this type of access may allow users to copy or damage anything on the system. An add-on to the fee-based enterprise edition allows users to sidestep that requirement, but it is not available with the free, community edition. Researchers can, however, use a different containerization tool such as Shifter, which doesn't require full privileges, or root access, but still supports Docker images.
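In practice, tools in this family let an unprivileged cluster user consume images that were built with Docker. The sketch below uses Singularity, one of the rootless alternatives named above; the image name `biotools/aligner` and the tool's flags are invented for illustration:

```shell
# Pull an image from Docker Hub and convert it to a Singularity image file.
# No root access is needed for this step or the next.
singularity pull aligner.sif docker://biotools/aligner:1.0

# Run a command inside the container as an ordinary cluster user.
singularity exec aligner.sif aligner --input reads.fastq --output hits.sam
```

Shifter exposes a similar workflow on the systems that deploy it, which is why developers like Brown can distribute Docker images even to users who could never run Docker itself.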

The requirement for root access is the biggest obstacle to widespread adoption of Docker, Brown explains. Many academics run bioinformatics tools on high-performance computing clusters administered by their home institutions or the government. "Of course, they don't have administrative privileges on most of those systems," he says. Brown spends about US$50,000 annually for cloud computing time on Amazon Web Services, but he says this represents just one-third of his computing work; the rest is carried out on a cluster at Michigan State University, where he lacks root-level access. As a result, Brown creates Docker containers of his tools for distribution, but can rarely use them himself.

Researchers can access Docker images either from the platform's own hosting service, Docker Hub, or from registries of containers such as BioContainers and Dockstore, which allow the sharing of tools vetted by other scientists. Brian O'Connor at the University of California, Santa Cruz, who was the technical lead for the Dockstore registry, recommends that scientists look through container registries to find a tool that works for their project instead of trying to reinvent something that already exists.

But actually getting the underlying Docker software to run properly can be challenging, says Simon Adar, chief executive of Code Ocean in New York, an online service that aims to simplify the process. "It's too technical; it was designed for developers to deploy complex systems." The service, launched in February, creates what Adar calls compute capsules, which comprise code, data, results and the Docker container itself. Researchers upload their code and data, and then either execute it in a web browser or share it with others, no installation required. Adar likens the process to sharing a YouTube video. The company even offers a widget that enables users to embed executable code in web pages.

Shakuntala Baichoo, a computer scientist at the University of Mauritius in Moka, learned about containers at a communal programming event, called a hackathon, organized by H3ABioNet. Previously, she spent hours helping collaborators install her tools. In making the tools easier to install, she says, containers not only free up her time, but they might also encourage scientists to test them and provide feedback.

At CERN, the particle-physics laboratory near Geneva, Switzerland, scientists use containers to accelerate the publication process, says physicist Kyle Cranmer at New York University who works on CERN's ATLAS project, which searches for new elementary particles. When physicists run follow-up studies, they have to dig up code snippets and spend hours redoing old analyses; with containers, they can package ready-to-use data analysis workflows, simplifying and shortening the process.

Cranmer says that although much of the debate around reproducibility has focused on data and code, computing environments themselves also play a big part. "It's really essential," he says. One study of an anatomical analysis tool's performance in different computing environments, for example, found that the choice of operating system produced a small but measurable effect (E. H. B. M. Gronenschild et al. PLoS ONE 7, e38234; 2012).

But containers are only as good as the tools they encapsulate, says Lorena Barba, a mechanical and aerospace engineer at George Washington University, Washington DC. "If researchers start stuffing their bad code into a container and pass it on, we are foredoomed to failure." And, says Brown, without pressure from funding agencies and journals, containers are unlikely to make researchers suddenly embrace computational reproducibility.

Indeed, few researchers are using containers, says Victoria Stodden, a statistician at the University of Illinois at Urbana-Champaign who studies computational reproducibility. In part that's because of a lack of need or awareness, but it is also because they might not have the computer skills needed to get going.

Behind the scenes, however, that could be changing. Companies such as Google and Microsoft already run some software in containers, says Jonas Almeida, a bioinformatician at Stony Brook University, New York. Large-scale bioinformatics projects may not be far behind. The cloud-based version of Galaxy will eventually run inside containers by default, says Enis Afgan, a computer scientist at Johns Hopkins University in Baltimore, Maryland, who works on Galaxy.

In 5–10 years, Almeida predicts, scientists will no longer have to worry about downloading and configuring software; tools will simply be containerized. "It's inevitable," he says.

The rest is here:
Software simplified - Nature.com