News from TechBeacon

The best cloud and IT Ops conferences of 2017

Cloud migration, orchestration and management, IaaS and PaaS pricing models, mobile app delivery, IoT system rollouts, big data analytics systems, regulatory compliance, systems performance, data security—these are some of the trends impacting IT infrastructure and cloud computing right now. A variety of IT pros must stay on top of such fast-changing technology developments to keep their organizations competitive and agile.

Here we offer a selection of conferences scheduled for 2017 that address these and other topics. They feature a variety of sessions, panels, tutorials, workshops, demos, and networking opportunities designed to help businesses with their cloud infrastructure strategies and implementations.

We have ranked them in three categories:

1.    Conferences we consider a "must."

2.    Others that are "worth attending."

3.    A third tier of events that, within their broader scope, have strong cloud and IT Ops content.

Best Practices Guide: Migrating Applications to the Cloud

It’s a hybrid world for cloud and IT ops pros

Enterprises must live up to the expectations of both their employees and customers for increasingly ubiquitous mobile apps and web services, delivered seamlessly and updated constantly, while making sure that data is protected from malicious hackers. On the technology front, that often means devising a hybrid cloud strategy—running workloads on-premises, in private clouds, and in public clouds—as well as embarking on data center modernization. That includes the adoption of software-defined networking and storage, as well as of ops management automation.

These tasks fall on the shoulders of cloud and IT ops teams. If you count yourself one of them, attendance at one or more conferences this year can bring you up to speed quickly, making you better able to leverage cloud benefits like faster deployment and reduced hardware costs.  

Must attend

What conferences made our A-list this year? Our top selections for cloud and infrastructure conferences are based on comments we've read from attendees, including conference presenters and other SMEs. We also consider each conference's year-over-year growth in attendance.

 

AWS re:Invent

Twitter: @AWSreInvent / @awscloud / #reInvent
Web: https://reinvent.awsevents.com
Date: Nov. 27 – Dec. 1
Location: Las Vegas
Cost: $1,299 (2016 prices)

Once synonymous with e-commerce, Amazon has become a major provider of platform and infrastructure cloud computing services (PaaS and IaaS) for enterprises, startups, and developers of all stripes via its Amazon Web Services division, competing with Google, IBM, Microsoft, and others. AWS re:Invent is the AWS annual user conference, featuring keynote speeches, training sessions, certification opportunities, technical sessions, an expo floor, and networking activities.

This year’s event will feature James Hamilton, Andy Jassy, and Werner Vogels, as well as introductory, advanced, and expert level sessions on a wide variety of subjects.

If you're a first-time attendee, read the FAQ before planning your trip, so you can make the best use of your time and ensure you hit the sessions and events you’re most interested in.

Writing for the Raygun blog, 2016 attendee Jesse James had this to say about re:Invent: “AWS Re:Invent 2016 was a huge event and covered much more ground than any one developer could hope to take in over a week.”

He also reports that “AWS re:Invent 2016 continued to address the tremendous growth of attendees over previous years by increasing the number of sessions and adding additional venue locations. Even with those changes the conference was still packed to the brim with attendees clamouring to get access to standing-room-only sessions that had been booked solid week in advance. Despite the large amount of attendees and sessions, everyone was still overwhelming friendly, welcoming, and up for a quick chat about anything tech related.”

Who should attend? AWS customers, developers and engineers, system administrators, systems architects

Gartner Catalyst Conference

Twitter: @Gartner_Events / #GartnerCAT
Web: http://www.gartner.com/events/na/catalyst#
Date:  August 21-24
Location:  San Diego, California
Cost: Ranges from $3,100 (early bird) to $3,400, with a special $2,900 price for public sector attendees (eligibility to be verified).

Featuring more than 50 Gartner analysts, Catalyst promises a "deep dive” into the digital enterprise’s architectural requirements, touching on areas such as mobility strategy and execution, cloud architecture, data analytics, enterprise-scale security and identity, software-defined data centers (SDDC), DevOps, and digital productivity via mobile and cloud. Gartner has described Catalyst as “technically focused and committed to pragmatic, how-to content” so that attendees go back to their places of work “with a blueprint for project planning and execution.”

Here’s an extended quote from blogger Jason Dover, from Kemp.

“Yesterday was a great start to the conference with the tone being expertly set in the opening keynote by Kyle Hilgendorf, Kirk Knoernschild and Drue Reeves. The big theme is how to architect and leverage technology for on-demand digital business transformation. Because of that, the week is packed full of sessions on IoT, planning for the scale of billions of connected things, using cloud to help mitigate attacks against an expanding attack surface and of course, containers. Kyle, Kirk and Drue highlighted that with the new ways technology is being applied, there is an intrinsic need for capabilities to sense and adapt in real time based on individual events as well as near real time based on aggregate data.

“As an example, an autonomous car needs to brake in milliseconds without sending queries to a backend and waiting for a response as we’re familiar with in traditional system architecture. Aggregate data may include weather service inputs as well as telemetry from vehicles ahead in traffic that are engaging their traction control systems, indicating icy conditions and resulting in an action that has a meaningful positive impact in your vehicle. However, dealing with these types of workflows and the growing number of connected things at scale can be challenging with traditional infrastructure planning principles.” You can read more of his comments on the 2016 conference.

Writing in her blog for Capterra, Jennifer Champagne included Catalyst in her top nine must-attend events, and she described the 2016 conference as follows: “If you want the lowdown on hot topics in tech like cloud computing, mobility, and IT management software, Gartner’s Catalyst Conference is for you. It not only teaches you about the potential of these technologies, but gives you practical advice and solutions for today’s IT professionals.”

The updated website for 2017 states: “Our 2017 agenda offers 8 in-depth tracks providing attendees a deep dive into a broad range of topics. From cloud computing, apps, mobility, data and analytics to security and identity, we have coverage for every technical professional.”

Who should attend? Technical professionals in roles including applications, business intelligence, infrastructure and operations, security and risk

Hadoop Summit/Dataworks Summit

Twitter: @DataWorksSummit / #DWS17
Web: https://dataworkssummit.com/munich-2017/
Date: April 5-6
Location: Munich, Germany
Cost: Ranges from early-bird Expo only €399 + VAT to all-access onsite €1000 + VAT

This year marks the tenth year of this conference, now renamed the “Dataworks Summit.” According to the Perficient blog post, “At the end of the last keynote at Hadoop Summit 2016, Herb Cunitz (President of Hortonworks) announced that … next year’s conference will be called Dataworks Summit. First question, will we still get the fun but mildly scary 3D elephant render?”

Good question. But the bigger question in the room is “why should you attend?” Here are a few answers.

The official website claims “you will learn how data is transforming business and the underlying technologies that are driving that change.” Of course, they’ll say things like that. But what did last year’s attendees think?

Commenting on the 2016 event (Hadoop Summit), Becky Mendenhall notes in her blog: “This year was by far and away our best experience yet, and it wasn’t because of the food (sorry, San Jose Expo Center). Nope, the reason that we keep coming back is because it confirms in our minds the fact Hadoop is growing every year, not just in the number of people who are interested and/or talking about it, but actual production users. With each conference, more attendees pack the halls, and more sessions are added to the agenda. The topics get more technical, and the number of customers speaking about their specific use cases grows.”

Fernanda Tavares, director of software development at Syncsort, said that the 2016 event “was a great way to celebrate 10 years of Hadoop. There were over four thousand attendees, over 170 sessions and lots of new sponsors.” She noted that Hortonworks “announced the concept of Assemblies, which will allow customers and vendors to package end-user applications such as fraud detection using Docker, and deploy them through Ambari.” And she described some of the event’s loftier goals: “Microsoft talked about projects to improve children’s education in India by predicting school drop-outs. They also talked about solving world hunger by predicting the best time to sow crops, and crowd-sourcing the measurement of radiation levels. Arizona State University talked about improving breast cancer diagnosis.”

Who should attend? IT pros working with Hadoop in areas like data analytics, security, app development, architects, storage managers

Strata+Hadoop World Conference

Twitter: @strataconf / @OReillyMedia / #StrataHadoop
Web: http://conferences.oreilly.com/strata/hadoop-big-data-ca
Dates: March 13-14 Training; March 14-16 Tutorials & Conference
Location: San Jose, California
Cost: Conference passes range from $1,595 to $2,395, depending on the type of ticket, while training passes are either $2,145 or $3,495. A variety of discounts are available.

Note: There are three additional Strata+Hadoop conferences for 2017. See details for conferences in London, New York, and Singapore.

This conference is presented by O’Reilly Media and Cloudera, one of the biggest providers of data management software for Hadoop, the open source framework for storing and processing very large data sets in clusters. Organizers offer a mix of deep technical immersion and business use in verticals such as finance, media, retail, transportation, and government. It will feature almost 200 sessions, a “hallway track,” networking opportunities, and after-hours entertainment.

The 2016 event hosted more than 7,000 people who heard keynote speakers, including White House chief data scientist DJ Patil, describe where they see machine learning, analytics, the Internet of Things, autonomous vehicles, and smart cities taking us in the near future.

Writing for Forbes, Bernard Marr described Patil’s talk about how big data and analytics are helping to reduce the human toll of opioid abuse in the United States. “When the president first started in office there was about 10 [open] data sets put out there, now there are about 2,000,” Patil said. Marr added: “Whatever you think of Barack Obama’s presidency that is an impressive achievement, as it means that anyone from major corporations to armchair data scientists can now use data to develop new strategies and technologies to harness it.”

Commenting on the 2016 event, Rob Rosen calls Strata+Hadoop World “one of my favorite semi-annual Big Data events. I’ve attended Strata so many times that I’ve lost count, and there’s no better way to validate the transformational nature of Big Data than to witness how the emphasis of each conference has changed over the years.

“Five years ago, the bulk of the conference focused on ‘What is Hadoop,’ to help attendees understand the components of this disruptive new technology stack.  A few years later, ‘Developments in Big Data’ was the theme, a result of many different entities joining forces to tackle some of the biggest challenges surrounding the adoption of Hadoop and NoSQL technologies in the field.”

Who should attend? Business decision-makers, strategists, architects, developers, data scientists, data analysts, CxOs, VCs, entrepreneurs, product managers, marketing pros, researchers

Cloud Computing Expo

Twitter: @CloudExpo / @SYSCONmedia / #CloudExpo
Web: http://www.cloudcomputingexpo.com/
Dates / Location:
June 6-8, Javits Center, New York, New York
October 31-Nov 2, Santa Clara Convention Center, Santa Clara, California
Cost: Early bird prices rise gradually by month from late January to June. Gold, premium, and Expo Plus rates are available. For the complete breakdown of pricing options, see the registration page.

Announced with the phrase “The world of cloud computing all in one place!”, this event promises to explore the entire world of enterprise cloud computing—private, public, and hybrid scenarios. It will address the latest on topics including IoT, big data, containers, microservices, DevOps, and WebRTC via keynotes, general sessions, breakout sessions, panels, and an expo floor.

Lauren Cooke in Cloud Solutions News wrote about the 2016 event: “Cloud computing is now being embraced by a majority of enterprises of all sizes. The opportunity for professionals to meet and collaborate, to support and augment how their business can leverage cloud capabilities is imperative.”

The official website states: “With cloud computing driving a higher percentage of enterprise IT budgets every year, it becomes increasingly important to learn about the latest technology developments and solutions. Cloud Expo offers a vast selection of technical and strategic Industry Keynotes, General Sessions, Breakout Sessions, and signature Power Panels. The exhibition floor features exhibitors offering specific solutions and comprehensive strategies. The floor also features a Demo Theater that gives delegates the opportunity to get even closer to the technology they want to see and the people who offer it.”

 Who should attend? CEOs, CIOs, CTOs, directors of infrastructure, VPs of technology, IT directors and managers, network and storage managers, network engineers, enterprise architects, and communications and networking specialists

Worth attending

We know that some of the conferences in this second category may be “must attend” events for some of our readers, especially those that appear to be growing in size each year. Generally, these conferences are smaller in attendance or are targeted at specific industries.

Interop Las Vegas

Twitter: @interop / #InteropITX
Web: http://www.interop.com/lasvegas/
Date: May 15-19
Location: MGM Grand, Las Vegas, Nevada
Cost: Ranges from $249 to $3,299, depending on the level of conference access you desire.

A venerable tech conference, Interop delves into topics like applications, cloud computing, collaboration, networking, IT leadership, security, software-defined networking, storage, virtualization and data center architecture, and mobility.

Gartner’s Symposium/ITxpo

Twitter: @Gartner_Events / #ITxpo / #GartnerSYM
Web: http://www.gartner.com/events/na/orlando-symposium
Date: Oct. 1-5
Location: Orlando, Florida
Cost: Standard conference price is $5,750. Public-sector price is $4,200. Group discounts are available.

This is the mother of all Gartner conferences, aimed specifically at CIOs and technology executives in general, addressing topics—from an enterprise IT perspective—such as mobility, cybersecurity, cloud computing, application architecture, application development, IoT, and digital business.

Oracle OpenWorld

Twitter: @oracleopenworld
Web: https://www.oracle.com/openworld/index.html
Date: Oct.1-5       
Location: San Francisco, California
Cost: Not available

Oracle’s biggest event of the year, OpenWorld draws tens of thousands of customers, partners, and Oracle executives from around the world eager to hear the latest about the company’s products, including its databases and business applications.

Who should attend? Oracle customers, partners, developers, IT Ops pros

Salesforce Dreamforce

Twitter: @salesforce / @Dreamforce / #dreamforce
Web: www.salesforce.com/form/dreamforce/prereg/
Date: Nov. 6-9
Location: San Francisco, California
Cost: 2016 prices were $1,199 for early registration.

Sponsored by Salesforce, the 2016 conference featured more than 1,400 breakout sessions, along with more than 400 exhibitors on the expo floor, hands-on training sessions, networking opportunities, and “the biggest names in music.”

Who should attend? Salesforce customers from companies of all sizes and industries

Gartner Data Center, Infrastructure & Operations Management Conference

Twitter: @Gartner_Events / #GartnerDC
Web: http://www.gartner.com/events/na/data-center
Date: Dec. 4-7
Location: Las Vegas, Nevada
Cost: In 2016, public-sector attendees paid $2,750; other attendees paid $2,950 for early bird pricing, or $3,150 thereafter.

Organizers aim to provide attendees with practical knowledge for modernizing their infrastructure and operations, touching on topics such as cloud computing, virtualization, automation, DevOps, software-defined systems, and mobile. 

Who should attend? IT pros involved with operations and facilities, servers, storage and backup/recovery, mobile, cloud and desktop virtualization, data center networking

Google Cloud Next

Twitter: @googlecloud / #GCPNext  
Web: https://cloudnext.withgoogle.com/
Date: March 8-10
Location: San Francisco, California
Cost: Early bird pricing: $999; Full price starting Jan 17th: $1,499

Google Cloud Next focuses on Google’s IaaS and PaaS cloud computing services for businesses. Tracks include Infrastructure & Operations, App Development, and Data & Analytics. As would be expected, most speakers will be from Google, including Senior VP of Infrastructure Urs Holzle, but customers are also scheduled to speak, including execs from Dropbox, Land O’ Lakes, and Spotify.

Who should attend? IT Ops pros using Google Cloud Platform services

Red Hat Summit

Twitter: @RedHatSummit / #RHSummit
Web: http://www.redhat.com/en/summit
Date: May 2-4
Location: Boston, Massachusetts
Cost:  Not available

The conference will focus on Red Hat's technology strategy and newest products, with participation from the company’s product and technology leaders. There will also be customer panel sessions, technical sessions, and hands-on labs. For 2017, Red Hat Summit focuses on “the individual.”

Who should attend? Sys admins, IT engineers, software architects, VPs of IT, CxOs

VMworld 2017

Twitter: @VMworld / #VMWorld
Web: https://www.vmworld.com/en/us/index.html?
Date: Aug. 27-31
Location: Las Vegas, Nevada
Cost: $1,795 full conference pass

VMware’s annual gathering features more than 450 sessions, 250 partners, and almost 24,000 global attendees. The 2016 conference focused on the software-defined data center, end-user computing, hybrid cloud, cloud-native applications, DevOps, and technology futures. According to the VMworld website, more information will become available when 2017 registration opens.

Who should attend?  Sys admins, IT engineers, software architects, VPs of IT, CxOs

451 Research’s Hosting and Cloud Transformation Summit

Twitter: @451Research / #451HCTS
Web: http://www.451research-hcts.com/
Date: Sept. 18-20
Location: Las Vegas, Nevada
Cost: Ranges from $1,295 to $2,295

The 451 Research summit’s theme for 2016 was “Business Disruption in the Age of Cloud.” The conference caters to “corporate leaders, industry visionaries, IT practitioners, and financial professionals as they learn, network and map out strategies for today's rapidly changing IT landscape.”

Who should attend? Service providers, hardware/software vendors, investors

Dell-EMC World 2017

Twitter: @DellEMCWorld / #DellEMCWorld
Web: http://www.dellemcworld.com/index.htm
Date: May 8-11
Location: Las Vegas, Nevada
Cost: Ranges from $2,195 by Feb. 28, to $2,395 onsite

This Dell EMC event is described as “the premier enterprise technology forum for IT practitioners and business decision makers. We invite you to come see how the Dell Technologies family of businesses will help you reinvent your business, maintain your competitive advantage, and enrich the lives of those you serve.”

Who should attend? IT pros and business managers, EMC customers and partners

Cisco Live

Twitter: @CiscoLive / #CLUS
Web: http://www.ciscolive.com/us/
Date: June 25-29
Location: Las Vegas, Nevada
Cost: Ranges from $99 to $3,195 (onsite)

This event is Cisco’s annual user conference and, as such, is designed to inform attendees about the latest in the company’s products and technology strategies in areas such as networking, communication, security, and collaboration. The conference draws about 25,000 attendees and 200 exhibitors, and features about 600 sessions.

Who should attend? Cisco customers, both from IT and business areas

Cross-discipline conferences

Conferences in this category are targeted at specific industries or technologies—for example, security, cloud computing, and open source. Although you won’t necessarily see "cloud” or “IT ops” in the conference titles here, we believe these gatherings will hold interest for IT pros involved with infrastructure and operations.

Velocity

Twitter: @velocityconf / @OReillyMedia / #velocityconf
Web: 
http://conferences.oreilly.com/velocity/devops-web-performance-ca
http://velocityconf.com
Location:
San Jose, California, June 19-22
New York, New York, Oct. 1-4
London, UK, Oct. 18-20
Cost: Not available

Called “a great show to learn about Web operations, performance, DevOps, and more,” O’Reilly’s Velocity conference showcases smart minds who are putting DevOps to work in a business-driven IT setting. Damon Edwards, founder and managing partner of DTO Solutions, described it in an interview as a “high-quality web operations and web performance conference” that is “very operations-centric.”

If you go, you can expect a technical, performance-minded conference that is operations-centric, where developers, Ops, and designers converge.

Who should attend? Developers, operations specialists, IT Ops staff

O’Reilly Software Architecture Conference

Twitter: @OReillySACon / #OReillySACon
Web: http://conferences.oreilly.com/software-architecture-ny
Dates: Training, April 2-3; Tutorials and conference, April 3-5
Location: Hilton Midtown, New York, New York
Cost: Conference: from $1,445 to $2,145; Training: from $2,595 to $3,545

The O’Reilly Software Architecture Conference is designed to bridge business and technology, aiming to show attendees tradeoffs, technology options, engineering best practices, and "leadership chops." Its goal is to balance the depth and breadth of its new technology content, and touches on topics including microservices, distributed systems, integration architecture, DevOps, business skills, security, optimization, and UX design.

Who should attend? Engineers, developers, tech leads, and managers

Google I/O

Twitter:@googledevs / #GoogleIO
Web: https://9to5google.com/2016/12/23/google-io-2017-moscone-center/
Date: Not available (see note below)
Location: Moscone Center, San Francisco, California
Cost: Not available

Google I/O, first held in 2008, has become one of the most important developer conferences in the world. Like Apple’s WWDC, Google I/O isn’t strictly about mobile, but the event is heavily focused on the Android OS and its ecosystem.

The conference also covers developer tools and APIs for other Google products, services, and platforms, including the enterprise Cloud Platform, consumer online services like Google Play, products for publishers and advertisers like AdSense and Analytics, consumer devices like the Cardboard virtual reality headset, and even some of the company’s “moonshot” projects.

The 9to5Google website states the following: “I/O is usually held in mid-to-late May, but in the past has been held as late as June. This year’s event lasted three days, while the past two Moscone events have only been two days in length. General admission tickets usually cost $900, while a select number of academic ones are available for $300 with valid identification.”

Who should attend? Developers working with Android and with the growing variety of Google web services, mobile apps, and hardware

Microsoft Ignite

Twitter: @MS_Ignite / #MSIgnite
Web: http://www.ignite.microsoft.com/
Date: Sept. 25-29
Location: Orlando, Florida
Cost: Standard ticket price for the 2016 event was $2,220

Microsoft created Ignite in 2014 to consolidate several smaller conferences into a big one: Microsoft Management Summit, Microsoft Exchange Conference, SharePoint Conference, Lync Conference, Project Conference, and TechEd. It covers architecture, deployment, implementation and migration, development, operations and management, security, access management and compliance, and usage and adoption. Although it’s organized by and focuses on Microsoft and its products, it also draws more than 100 vendors who participate in the expo and as session speakers.

Who should attend? Microsoft developers

IT/Dev Connections

Twitter: @devconnections / #ITDevCon
Web: http://blog.devconnections.com/
Date: Oct. 23-26
Location: San Francisco, California
Cost: Tickets for the 2016 event were $1,199 and $1,999

This conference is aimed at developers and IT professionals of all stripes, and focuses on topics like big data and BI, virtualization, DevOps, enterprise management and mobility, cloud and data center, development platforms and tools, and enterprise collaboration. Emphasis is on Microsoft products like Azure, Exchange, SQL Server, and SharePoint, although other vendors are also discussed.

Who should attend? Developers, IT pros

Fusion 17

Web: http://10times.com/its-m
Date: Feb. 19-22
Location: Las Vegas, Nevada
Cost: Tickets range from $2,195 to $2,795, with discounts available.

This event covers IT service management topics, and specifically the benefits and challenges associated with using ITSM when implementing virtualization, cloud computing, mobility, security, SaaS, and other technologies in the enterprise. There is a key track devoted to DevOps and agile topics.

Who should attend? Developers involved with ITSM 

Cloud Computing Expo

Twitter: @CloudExpo / @SYSCONmedia / #CloudExpo
Web: http://www.cloudcomputingexpo.com/
Date/Location: June 6-8, Javits Center, New York, New York; Oct. 31-Nov. 2, Santa Clara Convention Center, Santa Clara, California
Cost: Depending on when it’s bought, a Gold Pass, which gives attendees full access to the proceedings, costs between $995 (best early-bird discount) and $2,500 (onsite).

This conference explores “the entire world” of enterprise cloud computing—private, public, and hybrid scenarios.

Who should attend? Cloud app developers

GlueCon 2017

Twitter: @gluecon / #gluecon
Web: http://gluecon.com/
Date: May 24-25
Location: Omni Interlocken, Broomfield, Colorado
Cost: $795

The conference focuses on what it considers the most important trends in technology, including cloud computing, DevOps, mobile, APIs, and big data, all from the perspective of developers, which organizers view as being at the core and at the vanguard of all these areas.

Who should attend? Developers in general

Monitorama

Twitter: @Monitorama / #monitorama
Web: http://monitorama.com/ 
Date: May 22-24
Location: Portland, Oregon
Cost: $400

As its name implies, Monitorama focuses strictly on software monitoring. It’s narrow in scope by design, with a single track, so that attendees have a cohesive, unified experience, and don’t suffer from “choice overload,” as founder Jason Dixon explains in this blog post detailing the origins and development of the conference. A big effort is made to create an atmosphere of inclusiveness among attendees, all of whom Dixon hopes to make feel welcome. Some have called Monitorama “a great small conference.”

Who should attend? Developers, operations staff, testers, QA pros

 

Surge 17

Twitter: @surgecon / #surgecon
Web: http://surge.omniti.com/2016
Date: Sept. 21-22
Location: Omni Shoreham Hotel, Washington, DC
Cost: $750

Known as the “scalability and performance conference,” Surge is organized by OmniTI, a web app scalability and performance vendor, and features “practitioner-oriented sessions.” Their website calls this event “two days of mind blowing, practitioner-oriented sessions presented by some of the most established professionals in our field. Meet and network in the Omni Shoreham’s historical, intimate setting.”

Who should attend? IT Ops, infrastructure admins, developers, QA pros

Dynatrace’s Perform 17

Twitter: @Dynatrace / #dynatrace / #DynatracePerform 
Web: http://www.dynatrace.com/en/perform.html
Date: Feb. 6-9
Location: Las Vegas
Cost: Conference: $795; Hands-on Training Day: $700; Official conference hotel (Cosmopolitan) room rates: $278.88 per day.

Application performance management vendor Dynatrace organizes this conference, whose tracks in 2015 included “APM in Action,” “Customer Experience,” “Continuous Delivery,” and “Operational Excellence.”

 Who should attend? Developers, IT Ops, testers, QA pros

Agile Testing Days

Twitter: @AgileTD / #AgileTD
Web: www.agiletestingdays.com
Date: Nov. 13-17
Location: Potsdam, Germany
Cost: 2016 prices ranged from €700 to €2,700

Considered one of Europe’s main software testing events, Agile Testing Days is aimed at companies interested in gaining an edge through “early, rapid and iterative application releases.” Judging by reactions from past attendees, the conference offers a mix of fun interludes and serious sessions that make the experience both enjoyable and worthwhile.

NOTE: More details for the 2017 event are not yet available, but there is some information on the website noted above.

Who should attend? Anyone involved with software testing—test managers, designers, analysts, consultants, architects, quality directors—as well as software architects, application developers, IT managers, CIOs, CTOs, software engineers

STAR Software Testing Conferences

Twitter: @TechWell / #StarEast / #StarWest
Web: https://www.techwell.com/software-conferences/star-software-testing-conferences
Dates / Locations:

  • Star East: May 7-12, Rosen Center Hotel, Orlando, Florida
  • Star West: Oct. 1-6, Disneyland Hotel, Anaheim, California
  • Star Canada: Oct. 15-20, Hyatt Regency, Toronto, Canada

Cost: Prices are different for each of these three conferences; price ranges depend on packages and discounts for early-bird registration.

  • Star East: Ranges from $495 to $4,295
  • Star West: Ranges from $495 to $3,295
  • Star Canada: Ranges from $795 to $3,995

These conferences, organized by TechWell, are designed specifically for testing and QA pros, touching on topics such as test management and leadership, software testing techniques, mobile app testing, test automation, certifications, QA methodologies, tools, agile testing, performance testing, exploratory testing, DevOps and software testing, and QA tester careers.

Who should attend? Software and test managers, IT directors, QA managers and analysts, test practitioners and engineers, development managers, developers, CTOs

Google Test Automation Conference

Twitter: @googletesting / @googledevs / #GTAC2016
Web: https://developers.google.com/google-test-automation-conference/?hl=en
Dates: Not available
Location: London
Cost: Not available (past events have been free)

GTAC, first held in 2006, is hosted by Google, draws engineers from industry and academia, and focuses on the latest technologies and strategies in test automation and test engineering. Past conferences have featured speakers from (of course) Google but also from many other companies and universities, including Georgia Tech, Intel, LinkedIn, Lockheed Martin, MIT, Splunk, Twitter, and Uber.

Regarding the 2017 event, the website encourages you to “Subscribe to the Google Testing Blog to receive registration announcements and updates for GTAC 2017, which will be held in London.”

Who should attend? QA and test pros

Software Test Professionals Conference & Expo

Twitter: @SoftwareTestPro / #STPCon
Web: http://www.stpcon.com/
Date: March 14-17
Location: Renaissance Phoenix Downtown, Phoenix, Arizona
Cost: Ranges from $645 to $2,395

Organizers say that this conference, “designed by testers for testers,” is focused on testing management and strategy, to let attendees improve their techniques, get up to speed on the latest tools, discuss trends, improve processes, and better understand the testing industry.

Who should attend? QA and testing professionals

Did we miss any conferences or events?

Please let us know in the comments below if there are any other events or conferences you think we should add to our list.


Image credit: Flickr

 

DevOps and CD in the crosshairs: A new approach for security in 2017

As we settle into 2017, there’s plenty of uncertainty surrounding the security and privacy of our digital world. Much of this uncertainty stems from the escalating intensity of cyberattacks against consumers and businesses, the evolution of the Internet of Things (IoT) as a weaponized battlefield, and questions about what impact the incoming administration will have on the government’s position on privacy.

But the shift by attackers from systems to applications is the bigger trend that should worry software professionals. This threat requires a different approach to security. Here's why.

What is the true state of security in DevOps?

Hackers zero in on DevOps and CD

Nasty people who want to do ugly things constantly seek out high-value targets that give them the most leverage over victims with the least amount of effort. There’s even a term for this in certain circles: “compromise impact efficiency.”

Continuous delivery / continuous integration (CD/CI) pipelines, now widely adopted at companies practicing agile development and DevOps, are a huge target. Consider the impact of advanced persistent threat (APT) malware, but applied at the application level instead of the system level. If threat actors can breach the software development pipeline, they can control your company by subverting its software code and components.

Healthcare and financial services organizations have some of the most valued data, and so are likely to be attacked first. These attacks will be aggressive and very public, so DevOps teams will need to live up to new standards of testing and prevention—preferably harmonizing these operations with existing DevOps tools and functions.

DevOps teams become more critical security players

As distributed computing and TCP/IP took hold in the early 1990s, the information security world revolved around resource access control facility (RACF) and TopSecret—mainframe access management. Distributed computing and network security had never been issues before, so there were no skilled security practitioners to get the job done.

The result: Network security was owned by the network organization. The same thing happened when web application security became a demand: Web developers were responsible for implementing security controls (e.g. web access management) even though the central information security organization was providing guidance and standards.

Just as network security ownership defaulted to network teams in the 1990s, the same will be true for agile security and DevOps teams in 2017. Cloud and agile technologies are being adopted faster than ever, and the industry doesn’t have time to wait for information security to develop the needed skills. Therefore, DevOps teams will be on the hook for implementing actual security controls.

The successful security team will recognize this, and seek to provide tools that harmonize with this trend, instead of fighting it. In so doing, these teams will maintain high degrees of visibility and create leverage for their already-stressed resources.

With new threats comes opportunity

Software professionals have said for over a decade that security should be built in, not bolted on. Here’s a prime opportunity to move towards that reality. How will your team or operations make it happen in 2017?


Image credit: Flickr

 

7 DevOps trends to watch in 2017Open in a New Window

As IT organizations continue to implement or modernize their DevOps practices, it’s important not to get left behind. IT operations management and development groups need to get a sense of where technologies are headed so they are ready to adapt when the time comes—or avoid the hype in some cases.

To help you understand the challenges and opportunities likely to arrive in 2017, TechBeacon spoke with experts in DevOps, cloud, microservices, containers, and emerging ecosystems such as serverless computing. Here’s what they had to say.

What is the true state of security in DevOps?

A clear definition of DevOps will finally emerge

IT companies have been struggling with DevOps transformations for years. J. Paul Reed, managing partner at Release Engineering Approaches, thinks that at least one struggle will end in 2017—the struggle to understand the exact definition of DevOps.

“2017 will be the year that DevOps is finally declared '1.0-stable.' It's no longer an emergent phenomenon, because there's been a lot of work to define and codify DevOps as a static set of principles and practices.” —J. Paul Reed

Reed discussed the dilemma of DevOps divergence in his argument that DevOps is disintegrating. While that divergence will continue, the generally accepted definition of DevOps will be promoted, while others become increasingly minimized. “As for what this ultimately means for DevOps, ask me in 2018,” he says. "We'll start to see this mechanism happen in 2017.” 

Jeremy Likness, a blogger and director of application development at managed services provider iVision, has an idea of what that final definition will look like: the new application lifecycle management (ALM) methodology.

“Many organizations will challenge agile and recognize DevOps as the new ALM methodology that is a generation beyond agile, rather than a superset. As part of this shift, we'll see Infrastructure as Code (IaC) continue to gain a foothold in continuous delivery pipelines.” —Jeremy Likness

Testers will learn to code or perish

TJ Maher, an automation developer and TechBeacon contributor, spent the last two years updating his skills to move from manual tester to automation developer to software engineer in test. In those same two years, he’s seen many of his former QA testing colleagues lose their jobs due to the major changes going on in the testing industry right now.

“Continuous integration and continuous delivery turned the big splash of Selenium WebDriver into a tsunami that washed away almost all of the software testing industry, drowning many of the manual testers and eroding their base of employability.” —TJ Maher

For many testing engineers, 2016's motto was "learn to code or perish." Testing is now focused at the web services level, with tremendous demand for skills in RESTful APIs and Selenium wrappers, he says.
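To make that shift concrete, here is a minimal sketch of what a web-services-level check can look like, using the REST Assured library. The library choice, endpoint, and class name are illustrative assumptions on my part, not anything Maher recommends.

import static io.restassured.RestAssured.given;

// A minimal, hypothetical smoke test against a REST endpoint. The base URI and
// path are made up; the point is that the "test" is code exercising an API
// directly, rather than a manual script driven through a UI.
public class UserApiSmokeTest {
    public static void main(String[] args) {
        given()
            .baseUri("https://api.example.com")   // hypothetical service under test
        .when()
            .get("/users/42")                     // call the web service directly
        .then()
            .statusCode(200);                     // assert on the HTTP response
    }
}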

Andy Tinkham, global practice lead for QA testing at C2 IT Solutions Consulting, believes there’s another big reason why generic QA professionals are having trouble finding jobs: the commoditization of testers.

Entire careers were built on quality analysts moving from industry to industry, applying their knowledge of good testing to systems, he contends.

“We focused on making testing repeatable enough that anyone could do it. When combined with an automation approach that tries to directly translate human activity into scripts and a development culture that is blurring the lines between roles, we ended up commoditizing the testing industry.” —Andy Tinkham

As a result, Tinkham says, more tests have been automated (although he admits this is no panacea), and other roles have taken on the tester’s duties. And in some cases, testing has been transferred to offshore teams.

Tinkham and Maher both say 2017 will be a pivotal year for testers, with most jobs requiring a higher degree of specialization than ever before. “Whether it’s a focus on data warehousing and ETL, a focus on automation, or a focus on some other aspect of testing that has previously been considered just one skill,” Tinkham says.

"Back to basics" agile movements will gain steam

Fifteen years after the creation of the Agile Manifesto, agile and scrum are considered best practices by many, but others have become jaded by the unintended side effects of dogmatic agile methodologies. “We’ve shifted away from those core values that made the Agile Manifesto so revolutionary in the first place,” says Tinkham, and this will be the year that “back to basics” agile movements gain steam.

Tinkham believes that two movements have the best chance to gain mainstream attention around the industry. One of them is Joshua Kerievsky’s Modern Agile, which was introduced in the Agile 2016 keynotes. The other is Heart of Agile, by Agile Manifesto signatory Alistair Cockburn.

These philosophies have already been presented at agile conferences and are reinvigorating many who have previously become jaded. "Expect to see them grow in influence and begin to impact mainstream agile thinking over the next 12-18 months,” Tinkham says.

More organizations will move to cloud, but pass on PaaS

David Linthicum, senior vice president at Cloud Technology Partners and regular TechBeacon contributor, says 2017 will be the year that most organizations move to the cloud in a big way.

“If they moved 20 applications in 2016, then in 2017 they will move 500.” —David Linthicum

The way that companies use the cloud will also start to change this year. “Platform-as-a-Service (PaaS) will start to die a slow death... because it tightly couples the solution to the cloud vendor,” says Likness. Companies will instead favor container-based solutions that give them flexibility and portability in a hybrid environment that may contain services from multiple cloud vendors.

Expect to see more Linux commodity servers and organizations leveraging .NET Core to move away from dependency on Windows-based machines, he adds. "More vendors will provide not only software-as-a-service (SaaS), but containers-as-a-service, so customers running on-premise have easy access to an up and running, out-of-the-box solution.”

Microservices hype will begin to cool

The enthusiasm surrounding microservices was at an all-time high in 2016. While microservices are a great advancement for many applications, they’re not a magic bullet, says Marco Troisi, senior software engineer at Bluefin Payment Systems. Both he and Paul Bakker, software architect at Netflix, think that the over-hyping of microservices will die down in this year.

“There are going to be more people talking about when it's not a good idea to build software with a microservices architecture. At the same time, tools that help us manage a distributed architecture are going to reach a higher level of maturity, making it easier than ever before to work with microservices.” —Marco Troisi

Bakker says that many enterprises treat microservices as a synonym for modern, lightweight frameworks. “Of course these lightweight alternatives are a great way forward. But that doesn’t necessarily mean you need [a] distributed architecture as well,” he says. “For those who do not understand the distinction between architecture and tools, microservices will become the new service-oriented architecture (SOA) in 2017 and those firms will likely invest a lot of money in commercial tools they don’t actually need.”

Christian Posta, a principal solutions architect in Red Hat’s middleware specialist group, expects that many developers will make mistakes this year with regard to microservices.

“Enterprises will begin realizing that Java's not ideal for microservices developments; yet many of them will continue to invest in that direction." —Christian Posta

Posta said he believes that major Internet-based businesses such as Netflix, Twitter, and others, will rethink their open source strategies. Many of them, he says, are just “slinging code over the wall.”

Those changes will be important, he says, because those open source tools and libraries from major tech firms will start displacing traditional vendor middleware. “Some new interesting startups will spring up around this emerging ecosystem,” he says.

Containers and orchestration tools will become easier to use

A huge amount of investment from major cloud providers is going into containers. Container cluster management is a key area where providers are building solutions, says Likness.

Troisi expects toolmakers to focus on making Docker and other containers easier to use this year, and he's particularly excited about Docker Compose, which should be production-ready in 2017.

“Storing Docker commands in Compose's easy-to-read YAML files is going to become the preferred way for developers to run Docker apps, as opposed to having to remember huge, unreadable command-line interface commands,” says Troisi.

Linthicum expects general interest in containers to grow this year, but only around greenfield applications.

“It will be too hard and costly to containerize most older applications.” —David Linthicum

Likness says containers became part of many development workflows last year. This year, he says, they will become just as prominent in production workflows.

Kubernetes will be the primary container orchestration engine

Kubernetes will be the de facto industry standard for container orchestration in 2017, Troisi predicts, and research seems to support this claim. But Kubernetes is still relatively difficult to set up and use, so container-based PaaS systems, such as Red Hat's OpenShift and CoreOS Tectonic, will help ease IT organizations into the world of Kubernetes and container orchestration.

Bakker agrees with Likness’ “slow death” prediction for PaaS, but only for older, strict-pattern lock-in offerings, such as Google AppEngine. He expects PaaS offerings based on Kubernetes, such as Google Container Engine and OpenShift, to thrive in 2017. Posta expects the same: “Kubernetes is eating containers, and will continue to roll on; PaaS providers built on Kubernetes will gain more traction than other ones.”

The race between cloud providers is not about virtual machines anymore, says Bakker. Instead, the race will focus on container platforms, and making it as easy as possible to run containers in the cloud.

"2017 will be the year where hosted container platforms become what IaaS was a few years ago.” —Paul Bakker

The hype level for serverless will rival that of microservices and containers

Serverless (also known as Functions-as-a-Service or “FaaS”) is one of the newest trends in IT, and has massive potential to fundamentally change how some organizations develop software. Bernard Golden, CEO at Navica, author of "Amazon Web Services for Dummies,” and regular TechBeacon contributor, expects increased awareness and more early adoption of serverless technologies.

“Serverless holds the potential for IT organizations to get out of the infrastructure management business completely, and focus on  application development and deployment. While IT has always been a domain of constant change, next year will offer more opportunity and challenge to IT organizations than they have ever seen before.” —Bernard Golden

Dean Hallman, chief technology officer and principal data systems consultant at software consultancy Cloudbox Systems, has been researching serverless frameworks since the first ones arrived in the open source space. He sees serverless frameworks creating more wizard-like experiences and filling in the feature gaps between major FaaS providers (such as AWS Lambda, Azure Functions, and IBM OpenWhisk) so that serverless apps can target any of these vendors' services from a single codebase.
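To show why those gaps exist, here is a minimal sketch of a function written directly against one provider's interface, AWS Lambda's Java handler API. The class name and logic are illustrative assumptions; each provider has its own equivalent of this handler shape, which is exactly the kind of provider-specific detail the frameworks Hallman describes try to abstract away.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A hypothetical, provider-specific function: AWS Lambda invokes handleRequest,
// and the platform, not an ops team, provisions and scales the servers behind it.
public class GreetingHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String name, Context context) {
        return "Hello, " + name;
    }
}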

Hallman also expects serverless to have a major impact on the evolution of DevOps. Developers will be more involved than ever in domains that were previously managed by Ops and DevOps. “Most serverless frameworks already propose a serverless-friendly DevOps workflow,” Hallman says. “AWS Lambda also extended its platform to promote ‘blast-radius’ containment to a core feature via AWS sub-accounts and organizations.”

That's why Hallman believes the groundwork has already been laid for 2017.

“[There will be] a merging of the aligning trends of serverless frameworks, AWS SAM, and AWS sub-accounts in a manner that satisfies both the access needs of developers and the security requirements of DevOps.” —Dean Hallman

Hallman expects microservices and container-based cloud infrastructure to merge with serverless in 2017, rather than being positioned as a competing approach. One example of this trend is the emergence of products like IronFunctions, from Iron.io, which have appeared recently as a "Lambda-anywhere" solution, he says.

Action items: How to move forward

Now that you have read what experts predict in DevOps this year, how should you respond? Here are a few key takeaways that should help you get your business prepared for the coming year.

  • Focus on infrastructure as code in your DevOps transformations. It’s now a core component.
  • Testers can still be generalists, but they need an area of specialization. They also need to know the fundamentals of programming, and have the ability to build their own applications.
  • Reassess any agile processes employed by your teams, and consider going back to basics using the principles outlined in Modern Agile or Heart of Agile.
  • IT operations should consider moving away from the old-style, vendor-locked PaaS services. Start exploring newer container-based and Kubernetes-based PaaS offerings to provide flexibility and portability in a hybrid environment that may contain services from multiple cloud vendors.
  • Once you have researched all the reasons why you should migrate to microservices, find articles about why you shouldn’t migrate to microservices, and get a more balanced perspective to temper the hype. 
  • If you have only used containers in development, it's time to start experimenting with ways to use them in production.
  • Development and operations management should start boning up on serverless architectures, and begin experimenting with them.

 

That's what the experts say. What are your predictions and retrospectives for development and operations? Let us know your opinion on these predictions and your own expectations in the comments section below.


 

 

The legacy developer's guide to Java 9

Every few years, when a new version of Java is released, the speakers at JavaOne tout the new language constructs and APIs, and laud the benefits. Meanwhile, excited developers line up, eager to use the new features. It’s a rosy picture—except for the fact that most developers are charged with maintaining and enhancing existing applications, not creating new ones from scratch.

Most applications, particularly commercially sold ones, need to be backward-compatible with earlier versions of Java, which won’t support those new, whiz-bang features. And, finally, most customers and end users, particularly those in enterprises, are cautious about adopting the newly announced Java platform, preferring to wait until they’re confident that the new platform is solid.

This leads to problems when developers want to use a new feature. Do you like the idea of using default interface methods in your code? You’re out of luck if your application needs to run on Java 7 or earlier. Want to use the java.util.concurrent.ThreadLocalRandom class to generate pseudo-random numbers in a multi-threaded application? That's a no-go if your application needs to run on Java 6 or earlier, since that class arrived in Java 7.
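Here is a minimal sketch of the two features just mentioned (the class and interface names are only illustrative): the default method requires a Java 8 target, and ThreadLocalRandom requires at least Java 7.

import java.util.concurrent.ThreadLocalRandom;

// Default interface methods arrived in Java 8; compiling this for a Java 7 target fails.
interface Greeter {
     default String greet() {
          return "hello";
     }
}

public class FeatureSketch implements Greeter {
     public static void main(String[] args) {
          // ThreadLocalRandom was added in Java 7, so this won't run on Java 6.
          int roll = ThreadLocalRandom.current().nextInt(1, 7);
          System.out.println(new FeatureSketch().greet() + ", you rolled a " + roll);
     }
}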

When new releases come out, legacy developers feel like kids with their noses pressed up against the window of the candy store: They’re not allowed in, and that can be disappointing and frustrating.

So is there anything in the upcoming Java 9 release that’s aimed at developers working on legacy Java applications? Is there anything that makes your life easier, while at the same time allowing you to use the exciting new features that are coming out next year? Fortunately, the answer is yes.

Mobile Analytics Playbook: A practical guide

What legacy programmers could do before Java 9

You can shoehorn new platform features into legacy applications that need to be backward-compatible. Specifically, there are ways for you to take advantage of new APIs. It can get a little ugly, however.

You can use late binding to attempt to access a new API when your application also needs to run on older versions of Java that don’t support that API. For example, let’s say that you want to use the java.util.stream.LongStream class introduced in Java 8, and you want to use LongStream’s anyMatch(LongPredicate) method, but the application has to run on Java 7. You could create a helper class as follows:

import java.lang.reflect.Method;

// NotImplementedException is assumed to be an exception class defined elsewhere in the application.
public class LongStreamHelper {
     private static Class longStreamClass;
     private static Class longPredicateClass;
     private static Method anyMatchMethod;

     static {
          try {
               // Look up the Java 8 types reflectively so this class still loads on Java 7.
               longStreamClass = Class.forName("java.util.stream.LongStream");
               longPredicateClass = Class.forName("java.util.function.LongPredicate");
               anyMatchMethod = longStreamClass.getMethod("anyMatch", longPredicateClass);
          } catch (ClassNotFoundException e) {
               longStreamClass = null;
               longPredicateClass = null;
               anyMatchMethod = null;
          } catch (NoSuchMethodException e) {
               longStreamClass = null;
               longPredicateClass = null;
               anyMatchMethod = null;
          }
     }

     public static boolean anyMatch(Object theLongStream, Object thePredicate)
          throws NotImplementedException {
          // Null here means we're running on a JVM without the Java 8 API.
          if (longStreamClass == null) throw new NotImplementedException();

          try {
               Boolean result
                    = (Boolean) anyMatchMethod.invoke(theLongStream, thePredicate);
               return result.booleanValue();
          } catch (Throwable e) { // lots of potential exceptions to handle. Let’s simplify.
               throw new NotImplementedException();
          }
     }
}
There are ways to make this a little simpler, or more general, or more efficient, but you get the idea.

Instead of calling theLongStream.anyMatch(thePredicate), as you would in Java 8, you can call LongStreamHelper.anyMatch(theLongStream, thePredicate) in any version of Java. If you’re running on Java 8, it’ll work, but if you’re running on Java 7, it’ll throw a NotImplementedException.
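A call site might then look something like this minimal sketch (the class name and the "no match" fallback are illustrative; the stream and predicate are plain Objects because Java 7 code can't name the Java 8 types directly):

public class LongStreamClient {
     public static boolean hasMatch(Object theLongStream, Object thePredicate) {
          try {
               // Works on Java 8, where the helper's reflective lookup succeeded.
               return LongStreamHelper.anyMatch(theLongStream, thePredicate);
          } catch (NotImplementedException e) {
               // Running on Java 7 or earlier: fall back to whatever pre-Java 8
               // logic the application already has (here, simply report no match).
               return false;
          }
     }
}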

Why is this ugly? Well, it can get extremely complicated and tedious when there are lots of APIs you want to access. (In fact, it’s tedious already, with a single API.) It’s also not type safe, since you can’t actually mention LongStream or LongPredicate in your code. Finally, it’s much less efficient, because of the overhead of the reflection, and the extra try-catch blocks. So, while you can do this, it’s not much fun, and it’s error-prone if you're not careful.

While you can access new APIs and still have your code remain backward-compatible, you can’t do this for new language constructs. For example, let’s say that you want to use lambdas in code that also needs to run in Java 7. You’re out of luck: the Java compiler will not let you specify a source compliance level later than its target compliance level. So, if you set a source compliance level of 1.8 (that is, Java 8) and a target compliance level of 1.7 (Java 7), the compiler will not let you proceed.

Multi-release JAR files to the rescue

Until recently, there hasn’t been a good way to use the latest Java features while still allowing the application to run on earlier versions of Java that don’t support those features. Java 9 provides a way to do this for both new APIs and new Java language constructs: multi-release JAR files.

Multi-release JAR files look just like old-fashioned JAR files, with one crucial addition: There’s a new nook in the JAR file where you can put classes that use the latest Java 9 features. If you’re running Java 9, the JVM recognizes this nook, uses the classes in that nook, and ignores any classes of the same name in the regular part of the JAR file.

If you’re running Java 8 or earlier, however, the JVM doesn’t know about this special nook. It will ignore it, and only run the classes in the regular part of the JAR file. When Java 10 comes out, it will offer another nook specifically for classes using new Java 10 features, and so forth.

JEP 238, the Java enhancement proposal that specifies multi-release JAR files, gives a simple example. Consider a JAR file containing four classes that will work in Java 8 or earlier:

JAR root

      - A.class
      - B.class
      - C.class
      - D.class

Let’s say that Java 9 comes out, and you rewrite classes A and B to use some new Java 9-specific features. Later, Java 10 comes out and you rewrite class A again to use Java 10’s new features. At the same time, the application should still work with Java 8. The new multi-release JAR file looks like this:

JAR root
      - A.class
      - B.class
      - C.class
      - D.class
      - META-INF
           - versions
                - 9
                     - A.class
                     - B.class
                - 10
                     - A.class

In addition to the new structure, the JAR file’s manifest contains an indication that this is a multi-release JAR: the attribute Multi-Release: true.

When you run this JAR file on a Java 8 JVM, it ignores the META-INF/versions section, since it doesn’t know anything about it and isn’t looking for it. Only the original classes A, B, C, and D are used.

When you run it using Java 9, the classes under META-INF/versions/9 are used instead of the original classes A and B, but the classes in META-INF/versions/10 are ignored.

When you run it using Java 10, both META-INF/versions branches are used; specifically, the Java 10 version of A, the Java 9 version of B, and the default versions of C and D are used.

So, if you want to use the new Java 9 ProcessBuilder API in your application while still allowing your application to run under Java 8, just put the new versions of your classes that use ProcessBuilder in the META-INF/versions/9 section of the JAR file, while leaving the old versions of the classes that don’t use ProcessBuilder in the default section of the JAR file. It’s a straightforward way to use the new features of Java 9 while maintaining backward compatibility.

The Java 9 JDK contains a version of the jar tool that supports creating multi-release JAR files. Other non-JDK tools also provide support.

Java 9: Modules everywhere

The Java 9 module system (also known as Project Jigsaw) is undoubtedly the biggest change in Java 9. One goal of modularization is to strengthen Java’s encapsulation mechanism so that the developer can specify which APIs are exposed to other components, and can count on the JVM to enforce that encapsulation. Modularization’s encapsulation is stronger than that provided by the public/protected/private access modifiers of classes and class members.

The second goal of modularization is to specify which modules are required by which other modules, and to ensure that all necessary modules are present before the application executes. In this sense, modules are stronger than the traditional classpath mechanism, since classpaths are not checked ahead of time, and errors due to missing classes only occur when the classes are actually needed. That means that an incorrect classpath might be discovered only after an application has been run for a long time, or after it has run many times.

The entire module system is large and complex, and a complete discussion is beyond the scope of this article. (Here's a good, in-depth explanation.) Rather, I'll concentrate on aspects of modularization that support legacy application developers.

Modularization is a good thing, and developers should try to modularize their new code wherever possible, even if the rest of the legacy application is not (yet) modularized. Fortunately, the modularization specification makes this easy.

First, a JAR file becomes modularized (and becomes a module) when it contains a file module-info.class (compiled from module-info.java) at the JAR file root. module-info.java contains metadata specifying the name of the module, which packages are exported (i.e., made visible to the outside), which modules the current module requires, and some other information.
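
As a rough illustration, a module-info.java for a hypothetical module might look like the sketch below; the module and package names are invented for the example:

// module-info.java at the root of the module's source tree (illustrative names only)
module com.example.orders {
    requires java.sql;               // modules this module depends on
    exports com.example.orders.api;  // packages made visible to other modules
}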

The information in module-info.class is only visible when the JVM is looking for it, which means that modularized JAR files are treated like ordinary JAR files when running on older versions of Java (assuming the code has been compiled to target an earlier version of Java; strictly speaking, you’d need to cheat a little and still compile module-info.class to target Java 9, but that’s doable).

That means that you should still be able to run your modularized JAR files on Java 8 and earlier, assuming that they’re otherwise compatible with that earlier version of Java. Also note that module-info.class files can be placed, with restrictions, in the versioned areas of multi-release JAR files.

In Java 9, there is both a classpath and a module path. The classpath works like it always has. If a modularized JAR file is placed in the classpath, it’s treated just like any other JAR file. This means that if you’ve modularized a JAR file, but are not ready to have your application treat it as a module, you can put it in the classpath, and it will work as it always has. Your legacy code should be able to handle it just fine.

Also, note that the collection of all JAR files in the classpath is considered to be part of a single unnamed module. The unnamed module is treated like a regular module, but it exports everything to other modules, and it can access all other modules. This means that if you have a Java application that’s modularized, but have some old libraries that haven’t been modularized yet (and perhaps never will be), you can just put those libraries in the classpath and everything will just work.

Java 9 contains a module path that works alongside the classpath. Using the modules in the module path, the JVM can check, both at compile time and at run time, that all necessary modules are present, and can report an error if any are missing. All JAR files in the classpath, as members of the unnamed module, are accessible to the modules in the module path and vice versa.

It’s easy to migrate a JAR file from the classpath to the module path, to get the advantages of modularization. First, you can add a module-info.class file to the JAR file, then move the modularized JAR file to the module path. The newly minted module can still access all the classpath JAR files that have been left behind, because they’re part of the unnamed module, and everything is accessible.

It’s also possible that you might not want to modularize a JAR file, or that the JAR file belongs to someone else, so you can’t modularize it yourself. In that case, you can still put the JAR file into the module path; it becomes an automatic module.

An automatic module is considered a module even though it doesn’t have a module-info.class file. The module’s name is derived from the name of the JAR file containing it, and it can be explicitly required by other modules. It automatically exports all of its publicly accessible APIs, and reads (that is, requires) every other named module, as well as the unnamed module.
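
For example, if a hypothetical legacy library named legacy-utils-1.2.jar is dropped onto the module path, Java derives an automatic module name from the file name (roughly: the version suffix is stripped and non-alphanumeric characters become dots), and other modules can require it by that name:

// module-info.java of a module that depends on the automatic module (illustrative names)
module com.example.app {
    requires legacy.utils;  // automatic module name derived from legacy-utils-1.2.jar
}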

This means that it’s possible to make an unmodularized classpath JAR file into a module with no work at all: Legacy JAR files become modules automatically, albeit without some of the information needed to determine whether all required modules are really there, or to determine what is missing.

Not every unmodularized JAR file can be moved to the module path and made an automatic module. There is a rule that a package can only be part of one named module. So if a package is in more than one JAR file, then only one of the JAR files containing that package can be made into an automatic module; the others can be left in the classpath and remain part of the unnamed module.

The mechanism I've described sounds complicated, but it’s really quite simple. All it really means is that you can leave your old JAR files in the classpath or you can move them to the module path. You can modularize them or you can leave them unmodularized. And once your old JAR files are modularized, you can leave them in the classpath or put them in the module path.

In most cases, everything should just work as before. Your legacy JAR files should be at home in the new module system. The more you modularize, the more dependency information can be checked, and missing modules and APIs will be detected far earlier in the development cycle, possibly saving you a lot of work.

DIY Java 9: The modular JDK and Jlink

One problem with legacy Java applications is that the end user might not be using the right Java environment. One way to guarantee that the Java application will run is to supply the Java environment with the application. Java allows the creation of a private, or redistributable, JRE, which may be distributed with the application. The JDK/JRE installation comes with instructions on how to create a private JRE. Typically, you take the JRE file hierarchy that’s installed with the JDK, keep the required files, and retain only those optional files that provide functionality your application will need.

The process is a bit of a hassle: You need to maintain the installation file hierarchy, you must be careful not to leave out any files and directories that you might need, and, while leaving in extras does no functional harm, you don’t want to include anything that you don’t need, since it takes up unnecessary space. That's an easy mistake to make.

So why not let the JDK do the job for you?

With Java 9, it’s now possible to create a self-contained environment with your application, and anything it needs to run. There's no need to worry that the wrong Java environment is on the user’s machine, and no need to worry that you’ve created the private JRE incorrectly.

The key to creating these self-contained runtime images is the module system. Not only can you modularize your own code, but the Java 9 JDK is itself now modularized. The Java class library is now a collection of modules, as are the tools of the JDK itself. The module system requires that you specify the class library modules that your code requires, and that in turn determines the parts of the JDK that you need.

To put it all together, you'll use a new Java 9 tool called jlink. When you run jlink, you’ll get a file hierarchy with exactly what you need to run your application: no more and no less. It will be much smaller than the standard JRE, and it’s platform-specific (that is, specific to an operating system and machine), so if you want to create these runtime images for different platforms, you’ll need to run jlink in the context of installations on each platform for which you want an image.

Also note that if you run jlink on an application in which nothing has been modularized, there won’t be enough information to narrow down the JRE, so jlink will have no choice but to package the whole JRE. Even then, you’ll get the convenience of having jlink package the JRE itself, so you don’t need to worry about correctly copying the required file hierarchy.

With jlink, it becomes easy to package up your application and everything it needs to run, without worrying about getting it wrong, while including only the part of the runtime that’s necessary to run your application. This way, your legacy Java application has an environment on which it’s guaranteed to run.

When old meets new

One problem with having to maintain a legacy Java application is that you’re shut out of all the fun when a new version of Java comes along. Java 9, like its predecessors, has a bunch of great new APIs and language features, but developers, remembering past experiences, might assume that there’s no way to use those new features without breaking compatibility with earlier versions of Java.

Java 9’s designers, to their credit, seem to have been aware of this, and they’ve worked hard to make those new features accessible to developers who have to worry about supporting older versions of Java.

Multi-release JAR files allow developers to work with new Java 9 features, and segregate them in a part of the JAR file where earlier Java versions won’t see them. This makes it easy for developers to write code for Java 9, leave the old code for Java 8 and earlier, and allow the runtime to choose the classes it can run.

Java modules let developers get better dependency checking by writing any new JAR files in a modular style, all the while leaving old code unmodularized. The system is remarkably tolerant, is designed for gradual migration, and will almost always work with legacy code that knows nothing about the module system.

The modular JDK and jlink let users easily create self-contained runtime images so that an application is guaranteed to come with the Java runtime that it needs to run, and everything that it needs is guaranteed to be there. Previously, this was an error-prone process, but in Java 9 the tools are there to make it just work.

Unlike earlier Java releases, the new features of Java 9 are ready for you to use, even if you have an older Java application and need to ensure that customers can run your application—regardless of whether or not they’re as eager as you are to move up to the newest Java version.

Mobile Analytics Playbook: A practical guide

Image credit: Flickr

 

Why agile teams need to share the product owner role

Within many agile teams, the PO (product owner) is solely responsible for the definition, interpretation, and prioritization of requirements. While these areas are clearly ruled by the PO, making them the unique province of the PO is fundamentally at odds with healthy agile team practices such as cross-functional teaming and collaborative swarming.

Building a special box around the PO into which others should not venture is both unnecessary and unhealthy. Those attempts seem like wish fulfillment for technophiles, since this separation of duties might insulate the development team from many of the people-centered duties of software development, which are notoriously challenging.

Instead of using the PO as a boundary and buffer, agile teams benefit from sharing elements of the PO role with the designated PO, who should retain final say over what work needs to be done and whether or not it has been successful. Here's why.

Continuous testing: A practical guide

The PO is the chief steward of business value

On an agile team, the PO is principally responsible for representing business needs and ensuring that the team delivers business value. The PO develops the product backlog and prioritizes backlog items. In planning sessions, the PO ensures that the most important items are worked first, and helps to define acceptance criteria for those items.

Once the team is done working on the items, the PO verifies that the acceptance criteria have been met. Overall, the PO ensures that the team is working on the right features and producing valuable outcomes.

The PO is not a black box 

On many teams, however, the PO role is implemented in an exaggerated fashion. Since the PO is responsible for requirements and prioritization, the team treats the PO as a living specification document. The requirements are whatever the PO says that they are, and any questions about the requirements are resolved by asking the PO.

If there are confusions that need to be sorted out or priorities that need to be de-conflicted, those activities are thought to be squarely in the PO's domain, and the team waits while the PO goes to figure things out. The messiest and most difficult aspect of software development—nailing down the requirements—is left in the hands of a single individual. The team treats that individual as an abstraction layer for a variety of requirements management activities, such as education, research, strategic planning, and stakeholder negotiation.

Even though the team likes to emphasize cross-functional skill sets and swarming on technical tasks, when it comes to requirements—which are the foundation of their work—they are content to push down the toaster handle and wait for it to pop up when ready.

The isolated PO role is wish fulfillment

The tendency for agile teams to share all roles except product ownership likely derives in part from the interests and proclivities of the development team. By which I mean: The great fantasy of many software developers is to deal with code instead of people. While both present interesting challenges, machines are much more tractable than their owners.

When developers can't deal with technology alone, they might like to deal with other technical people to whom they can easily relate. Failing that, they would prefer to deal with a limited number of non-technical people who are well known to them. The PO role, as some teams define it, sounds suspiciously like wish fulfillment related to this fantasy. It gives technical people license to hide the messy, human-centric work they often don't like behind a particular well-known person with whom they are comfortable. And it allows them to instead focus on the kinds of work they prefer to do.

In some contexts, this is reasonable. However, since the PO role is almost always a proxy for many other people's requirements, it is a fallible (albeit useful) layer of abstraction. The PO can be wrong, and sometimes the task of requirements engineering is too large for a single mind to handle.

Teams need to learn to get inside "the PO box" and help the PO sort things out, while recognizing that this leader role has final authority over features and priorities.

Sharing the PO role

Development teams can help carry the burden of the PO in several ways, including:

Knowing the business

Development teams should understand the business considerations driving the software, and know why certain features are valuable. This will help them to ask better questions and make appropriate implementation decisions for low-level details that aren't covered by the stated requirements.

Knowing the stakeholders

Development teams should know who the application stakeholders are and how to get information from them. Routing every question of fact or intent through the PO will create an artificial and pointless bottleneck, particularly in cases where the PO functions as a proxy for others. Teams must learn how to get behind the PO when additional details or clarifications are needed. At the same time, they must keep the PO in the loop on any such efforts, so that the PO retains the ability to make informed and comprehensive judgments.

Challenging requirements

The PO is fallible, as are the stakeholders the PO represents. Requirements will sometimes be wrongly prioritized, missed, and misconstrued. The development team must use their business and technical knowledge to challenge requirements that seem misguided, supply new requirements for consideration, and argue for compromises motivated by implementation concerns.

However, the development team must recognize that the PO has final say in team matters, or else the team will be missing adequate direction.

Understanding project economics

The development team should understand the basis for prioritizing certain work. They must grasp ROI, payback periods, rates of return, and the time value of money. They should also understand the balance between time to market and technical debt. This knowledge will inform their engineering practices and drive them to decompose epics and user stories in ways that deliver the highest value elements first.   

Think inside the "box"

By venturing inside of the PO box and sharing some of the traditional PO duties, an agile team will become more informed, efficient, and fault-tolerant. The considerations that drive them to share technical tasks are equally applicable to the human-centric side of their work.  

Continuous testing: A practical guide

Image credit: Flickr

 

State of the hybrid enterprise: What's next for Dev and Ops?

Just 10 years ago, IT’s main concerns were to keep the systems in the data center running, and deal with the two-year backlog of applications that needed to be built.  Today, those concerns are pretty much the same.  But IT now has an opportunity to leverage public and private clouds, which will help it approach the efficiency of DevOps.  The question is: Should enterprises take advantage of these new approaches and technologies?

The answer is not simple, but at least today it’s answerable.  The hybrid enterprise is here.  You can now balance on-premises systems, such as traditional systems and private cloud, with the exploding use of public clouds.  This transformation changed IT forever, and presented some new opportunities for most enterprises.

Here's a look at the state of today's hybrid enterprise and what's next for Dev and Ops.

Continuous testing: A practical guide

Common patterns in the hybrid enterprise

There are a few common patterns emerging. They include:

The partial automation of development

The partial automation of development is part of moving to DevOps processes and a DevOps organization.  This means that the automation of development activities has occurred.  However, there are missing pieces.

For instance, while many organizations have automated some testing, such as unit and regression testing, they have not automated performance and penetration testing, which are still done manually.  The need for humans to intervene means that the process is less efficient, and the developers are not able to quickly respond to the needs of the business.

The under use of public clouds

While you would think that most enterprises are moving quickly to public clouds based upon the interest in the major providers, only about 5 to 8 percent of workloads have actually made it to public IaaS clouds, which are platform analogs for most data centers.

The reasons for this slow progress are that there is yet to be an automated way to move workloads, and many applications need to be refactored (redesigned and rebuilt) to take advantage of the native features of the public clouds.  Thus, enterprises find themselves unable to take advantage of the scalability and flexibility of the public cloud because they are not yet scalable and flexible when it comes to application and data migration to the cloud.  The irony of this situation is not lost on IT leadership.

The lagging integration of development (Dev) and operations (Ops)

While DevOps is really about automation and coordination between development and operations, this aspect of this emerging trend does not seem to be coming true as fast as it should.

The reasons are many, but the people issues lead the way.  Development and operations have been islands inside of IT, operating independently and sometimes not getting along.  Thus, you’re bound to get resistance if you try to push them together without a solid plan and set of incentives that will drive success.

Systemic to all of these patterns is that moving to new modes of IT—whether the cloud or DevOps—requires a great deal of change within organizations, including the changing of hearts and minds.  Most practitioners and IT leads have come around to the benefits of these emerging technologies; some staffers have yet to buy in.  Moreover, and perhaps more importantly, hybrid enterprises may not have the budgets to effect change at the rate desired.

Moving forward with the hybrid enterprise

So what’s next for Dev and Ops in the hybrid enterprise?  The focus will be around solving the issues raised above, and leveraging the hybrid approach to better meet the needs of the business.  Indeed, we’re seeing a few major trends already.

The pragmatic hybrid cloud

The growth of the “pragmatic hybrid cloud” is the first trend.  In many cases, enterprises opt not to adopt private cloud platforms; they leave workloads on traditional on-premises systems, and they make those systems work and play with systems that are public cloud-based.

The use of this hybrid cloud approach means that we’ll have public clouds as a platform option to reduce the hardware and software footprint within the enterprises.  However, the initial vision of “hybrid” included workloads that are drag-and-drop portable from public to private clouds.  This approach means that vision won’t be a reality.

The fact is, workloads placed on a public cloud are not likely to move off that public cloud.  That reflects the cost of migrating to, as well as off of, the public cloud.  The workloads typically go through some major renovations to make them run efficiently on cloud and non-cloud platforms, and no one is anxious to throw away that investment.

Improved costs and agility

There are many advantages that the hybrid enterprise will see.  The ability to optimize cost efficiency is one of the biggest.  Having the public cloud option means that there is no longer a need to tie capital expense to all application workloads.  Application development no longer needs to include hardware and software.  Instead, the ability to instantly provision the resources that are needed provides a huge agility advantage.

In order to realize this advantage, workloads need to go through the on-premises to cloud-migration process, and that is where the latency occurs.  Thus, the focus should not only be on leveraging private and public clouds, but the ability to automate the development and operations processes that will get the applications on the most efficient platforms.  That’s the problem that most hybrid enterprises need to solve. 

Finding automation

The hybrid enterprise needs to focus on the automation of both development and operations.  This automation involves three factors.

1. On-prem workloads

First is the ability to automate the redevelopment or migration of existing on-premises workloads to the public cloud, and sometimes to private clouds.  This means setting up a DevOps organization, along with processes, retraining, etc., that will allow the enterprise to take advantage of a migration factory to make short work of most application migrations that need to occur.

In some instances, enterprises have gotten to the point of moving an application a day from on-premises systems to the cloud.  They apply emerging DevOps concepts, as well as leverage tools, to move applications through continuous development, continuous integration, continuous testing, and continuous deployment.

2. Agile app dev

Second is the ability to automate net-new application development, application changes, and other activities that allow applications to change or appear as needed by the business.  This is the essence of DevOps.

Most organizations evolve toward the ability to set up a migration factory that can move thousands of application workloads to the cloud, and automatically deal with any refactoring that needs to take place.  They use that as a jumping off point to more formal DevOps processes and automation that becomes the platform on which they can redevelop and redeploy at the speed their business needs. Moreover, they can deploy to either cloud or on-premises platforms.

However, the actual state of things is that most hybrid enterprises have yet to set up the basic migration factory, which is really DevOps with training wheels.  They understand the value, and typically they have the vision in place; they know they need to evolve, but this path is a difficult and expensive one.

3. The need for cost awareness

Also well known are the business benefits, and the fact that costs go up and benefits go down the longer it takes to get the factory in place, as well as to achieve DevOps-lite and full DevOps.  This concept is easy to understand—you just have to add up the cost of inefficiencies as they affect the business.  For example, a company that is unable to automate a new factory at the speed it needs may see production delayed for as much as a year.  That can cost up to $0.5 million a day in lost revenue.

The largest cost is the difference between the as-is state of development backlogs and a fully agile business, one that can respond to all market and business demands in real time.  A fully agile business can access the compute resources that it needs, on demand, and build or change the applications that exist on those platforms.  By the way, a fully agile business does not yet exist.  However, a few innovators, such as Uber and Netflix, have come close; both are heavy DevOps and cloud users.

What does progress look like?

The hybrid enterprise is one of those problems that will need a stepwise progression, and most are still at the planning stage.  What should be the motivator here is the fact that, for most businesses, this is game-changing.  It offers the ability to provide a strategic advantage based on better usage of technology.

My advice is simple.  The new hybrid enterprises need to exploit the technology that they have already adopted.  The use of public clouds and private clouds, all working and playing well with traditional systems, is currently the norm.  Moving the workloads that should move to the cloud is now priority one.

But that’s not all that needs to be done. The ability to automate development and operations needs to occur at the same time, and sometimes this needs to pre-date the move to cloud to facilitate the migration itself.  So, which came first, DevOps or cloud? 

The answer is DevOps.

Continuous testing: A practical guide

Image credit: Flickr

 

The 3 most crucial security behaviors in DevSecOps

What if I told you that you could change the security posture of your entire DevOps team without ever documenting a single line of a process? It's hard to imagine that's possible, but it is. Security behaviors take the place of process, and change how the developer approaches security decisions.

In part one of this series, “A primer on secure DevOps: Why DevSecOps matters,” I discussed the importance of DevOps embracing security within its structure. The next logical question is, how do you transform a DevOps team into an army of security people? The answer is by modifying security behaviors.

People are the true drivers of application security, and in the world of DevOps, people move fast. DevOps people are not allergic to process, but in my experience, DevOps is more about the build pipeline and automation than process. People believe that process slows everything down. But if you embed security change into everyone on the DevOps team using security behaviors, you'll empower everyone as a security person.

The three core security behaviors you need to instill are threat modeling, code review, and red teaming. Each behavior is highly dependent on human beings. Tools are available to support each behavior, but the primary delivery agent is the human brain. Each behavior requires learning and practice. These are not things that a development team will do without direction.

What is the true state of security in DevOps?

Threat modeling

Security behavior: Consider the security impact of each design decision, and think like the attacker.

Desired outcome: Choose the design decision that protects the confidentiality and integrity of your customer’s data.

Metrics to measure efficacy: How many issues are you detecting and fixing prior to committing the code? And is the security light bulb turning on when developers see the impact of finding weaknesses in the design?

Threat modeling is about examining a design (or even code, if code is your design) to understand where security weaknesses exist. Threat modeling pinpoints how an attacker will attack your design, and highlights the places most likely to come under attack. With a threat model, you attack your product on paper, and fix those problems early in your development process.

Many DevOps practitioners approach the design phase with agile-colored glasses. They design in terms of user stories or features, and focus on getting the feature to build and operate. Code takes the place of traditional design time activities. This is a challenge because security can be left behind when your primary focus is to get code running.

After the developer has applied threat modeling behavior and considered security for each design decision, they can embed security directly into their decisions, and move toward a more secure option every time.

How to make it a habit: Show developers how to create a threat model, and quickly move to threat modeling an active design on which they are working. Move quickly from the theoretical to the real-world feature.

Security code review

Security behavior: Detect security flaws in another person’s code.

Desired outcome: Find the errors in the code that could be exploited if they reach production.

Metrics to measure efficacy: How many security issues are you able to detect and fix prior to a build, before promoting from test to production, or within a specific period of time?

A code review is a critique of another developer’s code by searching for problems. A security code review is a bit more refined. It's deeper than just looking for logic flaws. The practitioner must understand the common types of flaws (OWASP Top 10 for Web Apps or Buffer Overflows for C), how to detect them, and how to fix them. Many teams are already doing code reviews, but the developers are not knowledgeable about security, and they're unable to find security flaws.
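
To make the target concrete, here is a small, invented Java example of the sort of flaw (SQL injection, from the OWASP Top 10) that a security-aware reviewer should flag, along with one common fix:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {
     // Flawed: user-controlled input is concatenated directly into the query,
     // so crafted input can change the meaning of the SQL.
     public ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
          Statement stmt = conn.createStatement();
          return stmt.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
     }

     // Fixed: a parameterized query keeps the input as data, not executable SQL.
     public ResultSet findUserSafe(Connection conn, String name) throws SQLException {
          PreparedStatement ps = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
          ps.setString(1, name);
          return ps.executeQuery();
     }
}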

Strong DevOps teams use their infrastructure to force code review with each check-in to the main line. I’ve heard of teams that use built-in functionality in GitHub that only promotes a change if another engineer on the team has given a "+1," indicating that they reviewed and approved the change.

Static analysis tools offer a way to scan code changes, and perform automated code review. These tools should not replace the human connection during your code review. Static analysis alone can't find all the problems in your code. Knowledgeable humans can detect logic problems that tools aren't smart enough to find. But do use static analysis tools to enable a more effective code review.

How to make it a habit: Force a security code review as a requirement of the code commit process. Require a security +1 for each check-in. Teach your developers the fundamental security lessons of their languages, and how to find those issues in code. Finally, make static analysis tools available as part of your security tool package.

Red teaming

Security behavior: Attack your code with the same ferocity the bad people will apply to it when it reaches production.

Desired outcome: Uncover flaws using active testing, fix those flaws, and push the fixes to production as fast as possible.

Metrics to measure efficacy: How many legitimate issues are found and fixed because of red teaming efforts within a set amount of time?

The idea of red teaming began within the military, as a way for a group of people to imagine alternative situations and then plan how to respond. Within the context of security and DevOps, a red team refers to the idea of having people who take on the persona of an attacker and attempt to compromise the code.

Enacting such behavior means everyone on the team is always watching for some part of the product to compromise. Some teams approach red teaming by having people spend a portion of their time doing security testing, while others can justify having a dedicated red team resource that's always attacking the code.

The key to red team security behavior success is that nothing is ever off limits. When the code reaches production, attackers shouldn't consider anything to be out of bounds. People enacting the red teaming behavior must be given the freedom to try any type of attack, regardless of the potential outcome. As a word of caution, you can always point the red team resources to a staging version of the pipeline to protect your production instances. The point is to never say “that could not happen” or “nobody would ever attack that way”. If your team can think it up, then so can others.

As with the use of static tools in code review, red teaming can incorporate dynamic analysis tools that scan for web application vulnerabilities, as well as for missing network and other infrastructure patches. These tools do not replace the knowledge of the human, but they can find some of the easiest issues quickly.

How to make it a habit: Instill the idea that your code will be attacked, and provide the time and tools for everyone to spend some amount of time attacking the code.

Why security behavior matters

The traditional path to embracing security has historically focused on process. You list a series of steps and expect everyone to follow those steps to ensure a secure solution. The challenge with that process is that it breeds compliance, which means that someone improves security because they are forced to do so, not because they want the system to be more secure. Compliance provides some benefits, but it will never be as good as having developers change the way they think and embrace a security mindset. With compliance, people put forth the minimum amount of effort to check the box, and that results in minimal security gains.

To keep up with the pace of DevOps and mix in security, you need to approach things differently. You should leave behind the security process, and instead embrace the idea of security behaviors. If you can change security behavior, then any time your people reach a decision point, their programmed response for better security will kick in.

The idea for a set of lightweight and scalable security behaviors hit me while performing an application security assessment for a startup. The company had a mature DevOps process, and I soon realized that traditional application security practices were not going to work in its environment. A security behavior focuses on the lightest touch points, while still having an impact on security, and is the foundation of a true security culture change for a DevOps environment.

How to set the tone for security behavior

A good way to embed these behaviors within your team is to educate team members about the behavior, and then quickly move to its practical application. Encourage the activities and reward the team for completing them. The idea is to reinforce the positive behavior with the goal of evolving the security behavior into a habit.

True security culture change is reached when the behaviors begin to transform into habits. A security habit is just a security behavior that has been practiced over and over, and has become ingrained in the way the developer thinks.

I encourage you to embed these security behaviors within your DevOps process. Next time I'll conclude this series with an overview of security tools you can use to automate security in the DevOps build pipeline.

What is the true state of security in DevOps?

Image credit: Flickr

 

How page object patterns can stabilize test automation

Functional graphical user interface (GUI) test automation is hard because the web is constantly evolving to create a better user experience, and the problem is exacerbated by bad information on the web about how to correctly write functional tests.

That’s why most QA automation engineers complain about the "flaky" nature of their tests. But to improve the reliability of your automated functional tests, you first need to accept that the only thing constant in software development is change.

Once you accept that change is inevitable, you can focus on removing possible sources of change from your automation to increase test stability. Go through this exercise, and you will arrive at the page object pattern that removes most of the issues that have been making your tests unstable.

Continuous testing: A practical guide

The automation engineer's biggest complaint: Flaky tests

What is the most common problem automation engineers complain about with respect to functional test automation of the web?

I created this poll to confirm my suspicion about the number-one problem that plagues the test automation community. The results should come as no surprise: Most people complain that functional test automation on the web is flaky.

Twenty-seven percent of respondents complained about flakiness and synchronization issues. Even scarier is the fact that 53% of the automation engineers surveyed can only execute between 1 and 50 functional tests per day with a 95% accuracy rate. I bet these numbers are inflated, and that a majority of automation engineers can actually only execute between 1 and 10 functional tests per day with 95% accuracy.  

I'm confident in these numbers simply from experience. At my last three employers, prior to my arrival, the testing teams were able to execute 0, 15, and 10 automated functional tests per day, respectively. Sure, they had more functional tests than that, but I didn't trust them any more than I trust the Russians on a public network at Starbucks. 

Why engineers struggle with test automation stability

So why do so many automation engineers struggle with stable test automation? The reason is actually pretty straightforward, although I personally struggled with this concept for years. Then one day, while reading Robert Martin's Clean Code: A Handbook of Agile Software Craftsmanship, it hit me: My automated functional GUI tests were WET, as in I wrote everything twice. And, says Martin, "duplication is the primary enemy of a well-designed system," so writing everything twice was simply feeding my poorly designed system, making it an ever larger mess.

That's why a practice like page objects is so effective at helping to improve the stability of your automated functional GUI tests. When implemented correctly, page objects help to resolve many problems that make for a poorly designed, automated functional test.

The idea behind page objects: How they can help

The idea behind the page object pattern is straightforward, but using page objects alone doesn't guarantee great tests. With page objects, you use a layer of abstraction between your automated functional tests and the web page to decrease sources of duplication. In other words, you create a single class for a single web page. Then you use this class in your automated functional tests to interact with the web page in the same way you would interact with it manually.
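
For instance, here is a minimal page object sketch in Java with Selenium WebDriver (the article's own test samples are C#/NUnit); the page name, URL, and locators below are invented for illustration:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// One class models one web page; tests interact with this class instead of raw locators.
public class LoginPage {
     private final WebDriver driver;

     // Locators live in a single place, so a UI change means a one-line fix here.
     private final By usernameField = By.id("username");
     private final By passwordField = By.id("password");
     private final By loginButton = By.cssSelector("button[type='submit']");

     public LoginPage(WebDriver driver) {
          this.driver = driver;
     }

     public void goTo() {
          driver.get("https://example.com/login"); // illustrative URL
     }

     public void logIn(String username, String password) {
          driver.findElement(usernameField).sendKeys(username);
          driver.findElement(passwordField).sendKeys(password);
          driver.findElement(loginButton).click();
     }

     public boolean isAt() {
          return driver.getTitle().contains("Log in"); // simple check; real pages may need a stronger one
     }
}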

 

Using page objects: Pros and cons

Page objects enforce good object-oriented design principles, such as “don’t repeat yourself” (DRY). A good implementation of a page object helps you to remove duplication and follow the DRY principle. Since you need to interact with a web page using a class, you should encapsulate all of the duplication into methods and properties. Methods help to reuse code. Properties, usually linked to elements on a page, help you to have a single place where a locator for that element can change. 

Using page objects also allows for easy maintenance. Since your test code is now reusable and encapsulated within methods and classes, this makes maintenance easier. Therefore, if you are looking at a sample test like the one below (regarding this page), you can see that you can easily update any element identifiers or methods in a single place.

        [Test]
        public void Test5()
        {
            var complicatedPage = new ComplicatedPage(Driver);
            complicatedPage.GoTo();
            Assert.IsTrue(complicatedPage.IsAt(),
                "The complicated page did not open successfully");

            complicatedPage.CenterContent.OpenToggle();
            Assert.That(complicatedPage.CenterContent.IsToggleOpen(), Is.True);
        }

The test and its steps can remain intact. But if you want to update the implementation of the GoTo() method, for example, that implementation lives in a single place, in a single class: ComplicatedPage.

Therefore, if you have 250 of these functional tests, a single change inside of the GoTo() method will propagate this change through all of your tests.

This creates more robust code, because your tests are easier to maintain. A single change no longer means updating 250 instances. Instead, the paradigm is now that a single update to your code will propagate that change through all the tests that interacted with that method or property.

This approach also creates more readable tests. It's easy to understand what the automated functional test is doing. If you have a basic understanding of coding, you can read a test written using the page object pattern and understand its purpose. And if you're using the page object pattern correctly, your tests will read like living documentation. This reduces the need for separate documentation, since your automated functional tests can tell you exactly how the application is supposed to behave.

        [Test]
        public void Test4()
        {
            var complicatedPage = new ComplicatedPage(Driver);
            complicatedPage.GoTo();
            Assert.IsTrue(complicatedPage.IsAt(),
                "The complicated page did not open successfully");

            complicatedPage.LeftSidebar.Search("selenium");
            Assert.That(complicatedPage.IsAt(), Is.False);
        }

Code your tests and page object correctly

Just the fact that you’re using page objects in your functional test automation doesn't necessarily mean that your tests will be more robust. You need to implement the page objects correctly.

I am looking for an excellent Selenium Webdriver with Java instructor to teach students on my website. After asking for a specific code sample of one individual's tests using page objects, this is what I received:

 

This makes me angry — not at the individual, but at the fact that this person was led to believe that this is the right way to write an automated functional GUI test. This individual, who has nine years of development experience and six years of functional test automation experience under his belt, believes that this is a great test.

I have several problems with this example. First, I have zero understanding about what this functional test does. Second, this test has absolutely no reference to a single page class. Finally, based on my very shallow understanding, it seems as though all of the softValidate() methods are interacting with some kind of HTML property. So when something on the web page inevitably changes, this test, along with 100 others, will need to be updated.

This kind of abomination is all too common. I've seen this kind of code for many years, and I still see it today.

So I implore you, as a true professional, as an employee getting paid to do a great job, to learn the proper way to write a functional test using a page object. There are many great ways to write a functional test using a page object, but the code example above is not one of them. I have an entire course that teaches you how to write functional tests with page objects. I won’t say that my method is the best in the world, as I am always making improvements and learning. But I can definitely say that this test…

        [Test]
        public void Test3()
        {
            var complicatedPage = new ComplicatedPage(Driver);
            complicatedPage.GoTo();
            Assert.IsTrue(complicatedPage.IsAt(), 
                "The complicated page did not open successfully");

            complicatedPage.SocialMediaSection.ClickFirstTwitterButton();
            Assert.That(complicatedPage.IsAt(), Is.False);
        }

...is drastically more robust and easier to understand than the one I received above.

Next steps: Create your own page object pattern

Automated functional testing on the web is definitely hard, but you can make your life much easier by focusing on the DRY principle. By working to remove duplication from your tests, you will naturally begin to create great tests that use the page object pattern.

And if you want to skip the learning curve and jump straight into a good implementation, there are plenty of good examples on the web. When you start applying any of these examples, your functional test automation will see a drastic improvement in its robustness.

That's my advice for using page object patterns, but I’m always looking for ways to improve. Do you have a useful tip to share that I missed? If so, please post your comments below.

Want to learn more? See Nikolay's presentation, "Using page object pattern to drastically stabilize your automation," during the online Automation Guild conference, which runs January 9-13.

Continuous testing: A practical guide

 

DevOps study finds informal teams perform better

A recent study of Dev and Ops professionals in large enterprises found that those with the least mature DevOps implementations were seeing the most success. While that sounds like a paradox, it's the approach the teams took that matters. A similar tack might just benefit your own DevOps efforts.

The study, commissioned by HPE's Digital Research Team, focuses on what's called process maturity, a common framework that people use to describe how an organization can progressively improve the effectiveness of its work. Typically expressed as a range of levels from one to five (mirroring the Carnegie Mellon Software Engineering Institute’s Capability Maturity Model), maturity models are used to address everything from human resources processes, to information security, to e-learning—and now DevOps. 

Here are highlights from what the survey found.

DevOps mindset more important than formality

The four phases of DevOps maturity

For the 2016 Enterprise Agile and DevOps Study, YouGov asked more than 400 Dev and Ops professionals in enterprises with 500 or more employees about their adoption of and success with DevOps. The goal of the 15-minute online survey was to learn more about what practices and activities result in the highest levels of DevOps success.

To that end, the study asked about the state of DevOps deployment in the organization, with four levels of engagement as possible responses. These included:

  • Researching / evaluating DevOps approaches
  • Piloting DevOps approaches
  • Partially implemented DevOps approaches
  • Widespread implementation of DevOps across groups.

In this study, these represented four levels of DevOps maturity, and one would expect to see DevOps results increase as maturity level rises. But the study didn’t show a clear correlation between the state of DevOps deployment and better application delivery results. It did, however, uncover insights into how you can improve your own results.

Upon deeper analysis, the study broke down respondents into four segments, based on processes and results they were achieving:

  • DevOps laggards
  • DevOps majority
  • Formal DevOps leaders
  • Informal DevOps leaders

 

Source for all infographics: 2016 Enterprise Agile and DevOps Study, HPE.

Informal DevOps leaders take the lead

Informal DevOps leaders, the 10 percent of respondents who almost exclusively claimed to be researching and evaluating DevOps and who were at the lowest level of maturity, outperformed the other segments. They are releasing code faster, and with higher quality.

Many teams in this group released code weekly or faster, and their releases appeared to cause less rework and remediation when they reached production.

The formal DevOps leaders group, which also represented 10 percent of total respondents,  operated at the highest maturity level: most said they had implemented DevOps widely. With informal leaders, by contrast, most said they were still researching and evaluating DevOps.

Informal DevOps leaders were almost exclusively practicing agile. They leverage either small-team or enterprise agile in their work, which helps give them faster feedback cycles and the ability to experiment, learn and improve.

 

 

Informal DevOps leaders appeared to outperform all the other segments for a variety of other success criteria.  They reported delivering faster, more complete, more cost effective and more secure code than their peers.

You might think that the Informal DevOps leaders group would lead in adopting specific principles and approaches, such as deploying automation, defining processes to link Dev and Ops, and sharing KPIs and dashboards. But that's not the case. In fact, Informal DevOps leaders did not prioritize those principles and approaches. But they did place a clear emphasis on communication and collaboration.  

In other words, the informal DevOps leader segment appears to place culture, sharing, and collaboration above tools and techniques, which were more important to the other three groups.

Interestingly, the informal DevOps leader segment used all of the practices that one would expect to find in teams that have fully adopted DevOps, even though they described themselves as still researching and evaluating DevOps.  For example, this group reported:

  • More shared responsibility for testing and quality
  • More automated testing
  • Trunk-based development
  • Testing centers of excellence supporting the release trains
  • A shift toward “build, run, own” model for their code / a product orientation
  • Shared tools between developers and testers
  • Containerization
  • Use of the Scaled Agile Framework (SAFe)
  • An emphasis on the importance of security

 

While the informal DevOps leaders may not claim to be doing widespread DevOps, they are delivering impressive results.  They also appear to be following many of the best practices that one would expect to find in a team that has adopted DevOps and is fully mature.

DevOps maturity does not equal success

As the experience of informal DevOps leaders shows, DevOps success may not be directly linked so much to process maturity and standardization as it is to a mindset of exploring, experimentation and continuous learning. Indeed, the study illustrates the importance of agility, communication and collaboration in achieving faster delivery and higher quality. 

The full spectrum of DevOps techniques (continuous integration, continuous delivery, continuous testing, trunk-based development, containerization, and so on) clearly apply and play a role here, but the key differentiator was the spirit of learning, experimenting, improving—and maybe even a sense of humility.  

The key to success in DevOps is to not be complacent, but to continuously learn and improve. The survey results bear this out: DevOps is not a destination; it's a journey.

DevOps mindset more important than formality

 

33 test automation leaders to follow on Twitter

Over the past two years I’ve interviewed many awesome developers and testers for my TestTalks podcast, as well as many automation engineers who will be speaking at the Automation Guild online conference in January. Drawing on those conversations, I created this list of automation leaders and experts you should follow on Twitter in 2017.

Continuous testing: A practical guide

Jonathon Wright

@Jonathon_Wright

Jonathon is an automation cyborg from the future: At the HPE Discover conference in London last year, he gave me a tour of the city while talking automation for 8 hours non-stop. Jonathon is a top strategic thought leader in emerging technologies, innovation, and automation. He has authored several books on test automation, as well as running numerous online webinars and training courses.

 

Dan Cuellar

@thedancuellar

Dan is the creator of the open-source mobile automation framework Appium, head of software testing for @FoodItFOOD, and a penguin enthusiast. 'Nuff said.

 

Dave Haeffner

@TourDeDave

 @SeleniumHQ project member Dave Haeffner is the writer of Elemental Selenium, a free, weekly Selenium tips newsletter read by hundreds of testing professionals. He’s also the creator and maintainer of ChemistryKit, an open source Selenium framework, and is the author of the Selenium Guidebook.

 

Eran Kinsbruner

@ek121268

A director and mobile tech evangelist at Perfecto, Eran Kinsbruner has a wealth of hands-on, mobile testing experience under his belt, both from a functional testing and non-functional perspective.

 

Alan Richardson

@eviltester

Alan, an independent testing consultant and trainer, is well known for his expertise in agile and automated testing, as well as manual exploratory, technical, and performance testing. He is the author of Java for Testers, Dear Evil Tester, and Selenium Simplified, and is the creator of many online training courses.

 

Nikolay Advolodkin

@Nikolay_A00

Nikolay is a prolific automation test and quality assurance engineer who is currently a software testing instructor on his blog, UltimateQA.com. He’s also the creator and co-owner of QTPtutorial.net and a frequent contributor to the blog SimpleProgrammer.com.

 

Bas Dijkstra

@_basdijkstra

Bas helps organizations improve their testing efforts through the smart application of tools. He publishes a weekly blog on topics related to test automation and service virtualization at ontestautomation.com.

 

John Ferguson Smart

@wakaleo

An international speaker, consultant, author, and trainer, John Ferguson Smart is the author of the best-selling BDD in Action, as well as Jenkins: The Definitive Guide and Java Power Tools. He also leads development on the innovative Serenity BDD test automation library, which has been described as the "best open source Selenium WebDriver framework."

 

Scott Nimrod

@bizmonger

Fascinated with software craftsmanship, Scott Nimrod has been practicing software development since 2003. He’s a thriving entrepreneur, software consultant and blogger who focuses on native application development and test automation. He regularly contributes to his Bizmonger blog.

 

Greg Paskal

@GregPaskal

Greg is director of quality assurance for automation at Ramsey Solutions. He is the author of multiple white papers on test automation and testing, and recently published his first book, Test Automation in the Real World, which contains insights from his 30+ years of automated testing development.

 

Matt Wynne

@mattwynne

Matt Wynne is not only the founder of Cucumber Ltd. and a core developer on the Cucumber project, but also the author of one of my favorite books on BDD, The Cucumber Book: Behavior-Driven Development for Testers and Developers.

 

Seb Rose

@sebrose

After writing the internal training courses for IBM’s Quality Software Engineering department (QSE), Seb Rose went on to develop his own courses, which he runs for clients throughout Europe. He speaks regularly at international conferences on topics such as unit testing, test-driven development, behavior-driven development, and acceptance test-driven development. Seb was a contributing author to O’Reilly’s 97 Things Every Programmer Should Know,  is a popular blogger, and is a regular contributor to technical journals.

 

Wilson Mar

@wilsonmar

Wilson has been building and bringing enterprise applications to market on major platforms—from mobile to server clouds—as an architect, developer, performance tester, and manager. His website, wilsonmar.com, provides concise, in-depth advice on leading technologies, especially on LoadRunner and performance engineering.

 

Paul Merrill

@dpaulmerrill

Paul is the founder of and principal software engineer in test at Beaufort Fairmont Automated Testing Services. He co-hosts “Reflection as a Service,” a podcast about software development, automated testing and entrepreneurialism.

 

Angie Jones

@techgirl1908

As a consulting automation engineer at LexisNexis who advises several scrum teams on QA automation strategies and best practices, Angie Jones has developed automation frameworks for countless software products. A master inventor, she is known for her innovative and out-of-the-box thinking style, which has resulted in more than 20 patented inventions in the US and China.

 

Katrina Clokie 

@katrina_tester

Katrina serves a team of more than 30 testers as a testing coach in Wellington, New Zealand. She is an active contributor to the international testing community as the editor of Testing Trapeze magazine, and as a mentor with Speak Easy. She is also co-founder of her local testing Meetup, WeTest Workshops, and is an international conference speaker, as well as a regular blogger and tweeter.

 

Richard Bradshaw

@FriendlyTester

Richard is a "friendly tester" with a passion for all things testing. As a speaker and trainer, he’s a big advocate of automation—no, not the silver bullet type, but the type that really supports testing and testers. He is FriendlyBoss at @ministryoftest and the creator of @WhiteboardTest.

 

Mark Fink

@markfink

In addition to running FinkLabs, an independent software testing consultancy, Mark Fink is the author of The Hitchhiker’s Guide to Test Automation and the creator of many test automation and performance tools. These include GoGrinder, which can help you and your team check the stability and performance of your code.

 

Paul Grizzaffi

@pgrizzaffi

Paul is an automation program architect who has created automation platforms and tool frameworks based on proprietary, open source and vendor-supplied tool chains in diverse product environments ranging from telecom to stock trading, e-commerce, and healthcare. He is an accomplished speaker at both local and national meetings and conferences, and serves as an advisor to software test professionals and STPCon.

 

Mark Collin

@Ardesco

Mark is a big believer in open source technology, and spends much of his time contributing to open source projects. He is the creator and maintainer of the driver-binary-downloader-Maven-plugin, which allows Maven to download ChromeDriver, OperaDriver, IE driver, and PhantomJS to machines as a part of a standard Maven build. Mark has also contributed code to the core Selenium codebase.

 

Paul Grossman

@qtpmgrossman

A well-known test automation engineer with over 15 years’ experience, Paul Grossman has designed numerous WinRunner and QTP/UFT automation frameworks. He has spoken at HP Discover, QAAM, PSQT and the Dallas User’s Group; and is one of the top contributors to Facebook’s largest automation group, Advance Test Automation.

 

Gojko Adzic

@gojkoadzic

Gojko is the author of Fifty Quick Ideas to Improve your Tests, Fifty Quick Ideas to Improve your User Stories, Impact Mapping, Specification by Example, Bridging the Communication Gap, and Test Driven .NET Development with FitNesse.

 

Alan Page

@alanpage

Alan first joined Microsoft as a member of the Windows 95 team, and has since worked on many Windows releases, early versions of Internet Explorer, Office Lync, and Xbox One. He’s also a frequent speaker at industry testing and software engineering conferences. Alan writes about testing on his Angry Weasel blog, was the lead author of How We Test Software at Microsoft, and contributed chapters to Beautiful Testing and Experiences of Test Automation. He is also one of the hosts of the AB Testing podcast.

 

Jim Evans

@jimevansmusic

Software QA developer Jim Evans has spent more than two decades in the software industry, concentrating on software testing, but he only began working with WebDriver and Selenium in late 2009. His journey began with the .NET bindings, and he rewrote the Internet Explorer driver in late 2010.

 

Simon Stewart

@shs96c

He invented WebDriver while working at ThoughtWorks and has been a software engineer at Facebook. But what makes Simon Stewart a test engineer to follow is his work at Google, where he led the Selenium project, building the infrastructure required to run millions of daily browser-based tests.

 

Daniel Knott

@dnlkntt

Daniel has worked on a number of projects, developing fully automated testing frameworks for Android, iOS and web applications. He is a well-known mobile expert and a speaker at conferences in Europe and posts regularly to his Adventures in QA blog.

 

Mark Tomlinson

@mark_on_task

Mark "the performance Sherpa" Tomlinson has worked for many firms as a testing practitioner and consultant, and now offers coaching, training and consulting to help customers adopt modern performance testing and engineering strategies, practices and behaviors for better-performing technology systems. He is the co-founder and host of the popular PerfBytes podcast.

 

Rosie Sherry

@rosiesherry

Rosie is the founder of the Software Testing Club and Ministry of Testing communities, and co-founder of the testing agency Testing Ninjas. She’s also the organizer of the Ministry of Testing's popular TestBash software testing conferences.

 

Sahaswaranamam Subra

@Sahaswaranamam

Sahaswaranamam is a developer and general technologist with a passion for producing high-quality working, usable software. He has more than 10 years of experience in DevOps, quality engineering, consulting, and leading and coaching agile teams.

 

Anton Angelov

@angelovstanton

Anton, a quality assurance architect at Telerik, is passionate about automation and designing testing best practices. He is an active blogger, the founder of Automate the Planet, and one of the top-rated authors of answers to test automation framework (WebDriver) questions on Stack Overflow. His interests include Selenium, Jenkins, and C#.

 

Roy de Kleijn

@TheWebTester

Roy "The Web Tester" Kleijn is an independent technical test consultant with many years of experience in automated testing, with a focus on web technologies and new programming languages. He regularly speaks at conferences and provides practice-oriented Selenium training.

 

Unmesh Gundecha

@upgundecha

In his 10 years working in software development and testing, Unmesh Gundecha has architected functional test automation projects using industry standard, in-house and custom test automation frameworks, along with leading commercial and open source test automation tools. He regularly posts to his TestO'Matic blog.

 

Jason Huggins 

@hugs

Yes, being a robot artist tops his Twitter profile, but Jason Huggins is also the creator of Selenium, co-founder of Sauce Labs, inventor of the Tapsterbot, and a member of the HealthCare.gov Tech Surge.

 

Continuous testing: A practical guide

Image source: Flickr

 

How value-stream mapping delivers a better DevOps toolchainOpen in a New Window

The DevOps movement was built on concepts such as cross-discipline collaboration and communication—bringing together developers and their goals, priorities, and efforts with those of operations and other stakeholders—but many of today’s tools have only begun to bring these concepts to life. While DevOps, in theory, should unite segmented disciplines and work groups, the enterprise products that are supposed to facilitate that process provide only limited automation, integration, and visibility.

Adding more DevOps tools to the mixture has helped in some areas, but hurt in others, as the software development lifecycle has become increasingly bulky. Organizational leaders are overwhelmed with managing these complex toolchains. Many of the enterprise DevOps managers I talk to say they struggle to get control and gain a holistic view of all the DevOps tools in the lifecycle.

Enterprises use DevOps tools to help drive continuous build, test, delivery and other functions across the lifecycle. But many IT professionals say that to implement continuous improvement and feedback holistically, they need to be able to measure across the entire lifecycle, from planning to operations.

Value mapping can help with that.

Many organizations are looking to fix problems or improve processes, but they have very siloed DevOps tools, with different data sources that aren't integrated. This inhibits those leading software development or DevOps initiatives from getting the most value out of their DevOps tools and from enabling "smart DevOps" in their organizations. They don’t have a way to measure and improve processes across the entire software development lifecycle.

Here's how value mapping that abstracts from the DevOps toolchain can benefit your organization's software development and lifecycle management.

2016 State of DevOps Report

Integration is key

With the rise of best-of-breed tool chains, integration is a vital first step to getting the needed big-picture view of the software development lifecycle tools and processes. Existing investments and legacy assets shouldn’t impede the full utilization of newer DevOps tools, but the sheer number of tools available to help organizations succeed in DevOps can become a hindrance if those tools are not properly integrated, and if data consistency and integrity are not assured.  

Ensure visibility and traceability

Despite the advances we’ve made as an industry in improving the process for software development and deployment, managers still struggle to see everything that is happening across the board, and to connect teams, processes and tools appropriately. Tracing events and data and their associations across tools is another existing gap in what DevOps point tools have yet to offer. Correlated end-to-end visibility shortens the time-to-value of the development and delivery lifecycle greatly. Redundant processes are eliminated, successful strategies can be expanded across distributed teams, and feedback is integrated at a more rapid pace. 

Better visibility usually means better traceability, which is essential for the pipeline’s health and performance. When managers can follow each error to its origin, they correct issues faster and ensure better work quality, thanks to the accountability required of team members. A quicker and more accurate continuous feedback loop is possible when stakeholders from a number of business units—InfoSec, operations, and legal—have visibility into the entire process.

Don’t underestimate data’s importance

We’ve seen explosive buzz in the tech industry over the last few years around data—analytics, storage, processing, etc. Data plays a vital role in DevOps as well. In this arena, the challenge is leveraging the data that DevOps tools generate. A reality of the heterogeneous environments of today’s enterprise is that each tool within the DevOps chain generates its own unique events and data. Each tool likely generates reporting and tracking information as well, but without intelligent event integration and correlation, how can an enterprise make the most out of the vast amounts of data produced? 

The need for correlation of data between existing tools—so that companies can turn that data into actionable information—is great. DevOps stakeholders are asking for a single-pane-of-glass view of correlated data that provides insight across all stages of the software delivery lifecycle, from planning and application development to testing, deployment, and production monitoring.

This enables all teams adopting DevOps to move faster from concept to production, improving the velocity and quality of application delivery to the business. For example, a release manager may see that, although the last release came out on time, it increased service desk incident tickets by 20 percent. Traceability of chains of events and data helps with the implementation of corrective actions and processes.

Why value-stream mapping matters

At the rate things are changing in software development, this should be obvious: As DevOps initiatives and processes evolve, organizations change their toolchains. As you do so, focus on process improvement and on creating a value-stream map of your software development lifecycle across application portfolios, from planning to operations. Establish a baseline and develop KPIs for measurement, continuous improvement, and feedback.

Value mapping that abstracts from the specific DevOps toolchains allows organizations to evolve toolchains and still capture the data that drives critical KPIs across the software development and delivery lifecycle. Managers can use these metrics to accelerate collaborative, continuous improvement and feedback processes, and initiatives like control points, quality gates, audit-readiness, and fast-fix and rapid-response abilities.
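As a rough illustration of what capturing the data behind such KPIs can mean in practice, the following minimal Java sketch computes a commit-to-production lead-time KPI from normalized, correlated toolchain events. The event model, the event-type names, and the sample work-item IDs are hypothetical; in a real implementation these records would come from your integrated DevOps tools, not be hand-built as they are here.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.*;

// Minimal sketch: computing a "commit to production deploy" lead-time KPI from
// correlated toolchain events. ToolEvent and the event-type names are hypothetical.
public class LeadTimeKpi {

    static class ToolEvent {
        final String workItemId;   // correlation key across tools (e.g., a story ID)
        final String type;         // e.g., "COMMIT", "BUILD_PASSED", "PROD_DEPLOY"
        final Instant timestamp;

        ToolEvent(String workItemId, String type, Instant timestamp) {
            this.workItemId = workItemId;
            this.type = type;
            this.timestamp = timestamp;
        }
    }

    // For each work item, measure the time from its first COMMIT to its first PROD_DEPLOY.
    static Map<String, Duration> leadTimes(List<ToolEvent> events) {
        Map<String, Instant> firstCommit = new HashMap<>();
        Map<String, Instant> firstDeploy = new HashMap<>();
        for (ToolEvent e : events) {
            if (e.type.equals("COMMIT")) {
                firstCommit.merge(e.workItemId, e.timestamp, (a, b) -> a.isBefore(b) ? a : b);
            } else if (e.type.equals("PROD_DEPLOY")) {
                firstDeploy.merge(e.workItemId, e.timestamp, (a, b) -> a.isBefore(b) ? a : b);
            }
        }
        Map<String, Duration> result = new HashMap<>();
        for (Map.Entry<String, Instant> c : firstCommit.entrySet()) {
            Instant deploy = firstDeploy.get(c.getKey());
            if (deploy != null) {
                result.put(c.getKey(), Duration.between(c.getValue(), deploy));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<ToolEvent> events = Arrays.asList(
                new ToolEvent("STORY-42", "COMMIT", Instant.parse("2017-01-09T10:15:00Z")),
                new ToolEvent("STORY-42", "BUILD_PASSED", Instant.parse("2017-01-09T10:35:00Z")),
                new ToolEvent("STORY-42", "PROD_DEPLOY", Instant.parse("2017-01-11T16:00:00Z")));

        for (Map.Entry<String, Duration> e : leadTimes(events).entrySet()) {
            System.out.println(e.getKey() + " lead time: " + e.getValue().toHours() + " hours");
        }
    }
}
```

The point is not the code but the abstraction: as long as every tool's events can be normalized into a common shape and correlated by a shared key, the KPI survives changes to the underlying toolchain.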

The industry has come a long way in bringing together developers with other organizational stakeholders to improve the speed and quality of software development. But as DevOps tools meet the siloed needs across the software development lifecycle, organizations  must have a better understanding of their DevOps value stream across that lifecycle.

By having end-to-end correlated visibility across every DevOps toolchain component, organizations can leverage objective metrics and KPIs to ensure that delivery is operational and meets quality SLAs for the business. By prioritizing these considerations, you will be better able to leverage existing investments and set up your organization for future success in an industry and movement that is ever-changing.

2016 State of DevOps Report

 

The state of software security: 5 things developers can do nowOpen in a New Window

Of all of the activity and news about software security, three trends stood out in 2016: Software vulnerabilities in Adobe Flash were the most targeted by criminal exploit kits; flaws in a variety of consumer devices allowed a massive botnet to be created that disrupted services on the Internet; and attacks on Web applications were the top source of data breaches.

So what's next? Veracode's State of Software Security report sheds some light here.

Mistakes in software code continue to make both commercial and in-house applications vulnerable to attack, resulting in breaches and network compromises. Nonetheless, companies continue to make missteps in incorporating security into their software development process, according to the software-security firm's report.

More than 61 percent of applications failed to account for the OWASP Top 10 vulnerabilities, and 66 percent failed to catch the SANS Top 25, on their first security audit, the report said. But for the first time, there's also good news: Both of those failure rates were down from previous years, and Veracode's data shows that top-performing development organizations had vulnerability fix rates that were 68 percent better than those of average organizations.

The report contains other lessons for companies that want to stay on top of software security. Here are five key takeaways.

White paper: The business of hacking

1. Educate your development teams

Start by creating a software security program for your applications, and work with your development teams on ways to incorporate education and training into their workflows. Merely getting serious about software security can have benefits: Companies that created a software security program experienced 46 percent fewer vulnerabilities in their code than companies that did not have a program, the Veracode report says.

Security teams should also make sure that they are creating an ongoing effort while not getting in developers' way, says Tim Jarrett, director of enterprise security strategy at Veracode.

"If you are going to try to implement a formal education for developers, it doesn't work to bolt those on top of a one-time project. But making the effort part of how developers build software makes it possible to extend those services to developers and create an advantage, rather than a disruption." —Tim Jarrett, Veracode.

Overall, organizations that put a process in place to reduce vulnerabilities saw a 1.45x reduction in flaw density, while companies that made training and online learning part of their efforts saw a six-fold decrease in vulnerabilities, according to the report. The results are often seen in practice, says Dan Cornell, chief technology officer at the Denim Group.

We have seen the benefits of organizations moving security to the left and making those security tools available to developers, Cornell said.

"Handing a developer a security tool is not a recipe for success, but if you can craft the developer's experience using that tool, and better integrate with the developer tool chain, then you have a real increase in the consumption of security testing." —Dan Cornell, Denim Group

2. Don't rely on a single test

No tool will catch every flaw. Dynamic analysis tools can catch one set of flaws, while static analysis tools catch another. Both are good at catching information leakage and cryptography issues, for example, but results differ in other areas of potential security weakness.

Static analysis identified cross-site scripting issues in 52 percent of applications, while dynamic analysis found XSS issues in 25 percent. On the other hand, dynamic analysis caught deployment configuration issues in 57 percent of the applications tested, a class of security vulnerability that static analysis cannot detect.
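To make the cross-site scripting case concrete, here is a small, illustrative Java servlet sketch of the kind of reflected XSS flaw a static analysis tool would typically flag, along with the output-encoding fix. The servlet, the parameter name, and the use of the OWASP Java Encoder library are illustrative choices, not something prescribed by the Veracode report.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.owasp.encoder.Encode;

// Illustrative only: the kind of reflected XSS that static analysis flags,
// and the output-encoding fix. Servlet and parameter names are hypothetical.
public class SearchServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String query = req.getParameter("q");
        resp.setContentType("text/html");

        // Vulnerable pattern: untrusted input echoed straight into the page.
        // resp.getWriter().println("<p>Results for " + query + "</p>");

        // Safer: HTML-encode untrusted data before writing it to the response.
        resp.getWriter().println("<p>Results for " + Encode.forHtml(query) + "</p>");
    }
}
```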

If you are not testing your code, quite often security issues are introduced, said Alan Sharp-Paul, co-founder and co-CEO of Upguard.

"[It's] not because someone has introduced a vulnerable piece of software or has a piece code that is poorly written—they are actually introduced because they have poorly implemented and poorly configured an existing security or application setting." —Alan Sharp-Paul, Upguard

Finally, companies should conduct regular manual tests to ensure that they are catching the vulnerabilities and security weaknesses not caught by automated tests, says Cornell.

"What the report does not reflect here is the need for manual testing, especially of those high criticality applications," he says. "There are classes of vulnerabilities that you cannot find without manual testing."

3. Look at your open-source components

Developers should keep a manifest of the components they use to create code, whether it's a framework like Struts, a Java component, or an open source library. If you find a vulnerability, the software should be rebuilt. Yet, often those components are not promptly patched when a vulnerability is discovered, says Sharp-Paul.

"It is so easy to have a problem when you are developing a product, do a quick Google search, and find a Java library or a Ruby gem that can satisfy the requirement," Sharp-Paul says. "What you don't realize is that, under the covers, that one decision to add what may be a single include line to one of your files can easily drag in 20 or 30 other components. This is where the risk comes in."

Managing the components that go into applications is a critically important task, Jarrett says.

"No one seems to have their arms around the right way to address changes to the landscape when a component is found to be vulnerable. No one has that baked into their development process in a repeatable way. People don't think of the carrying cost of keeping that library updated." —Tim Jarrett, Veracode

4. Consider DevOps or agile development

The move toward DevOps and agile development practices is one bright spot for developers and security teams. Veracode, which offers a privately accessible, "sandbox" scanning service, saw many developers scan more often—up to 6 times a day. More than nine percent of companies scanned applications more than 15 times during the 18-month study period, and one application was scanned 776 times in 18 months—a frequency that suggests a DevOps mentality.

The results were impressive:

Companies that used sandbox scanning nearly doubled their fix rate.

Overall, the sense is that integrating developers more tightly with operations, and using automation to continuously test is an idea whose time has come, Jarrett says.

"There [are] a lot of things that make up DevOps in people's minds, where you make the same team responsible for building and deployment and operation of the software. But it is the culture of automation that we associate with the security automation of the deployment, and it's there that we are seeing a lot of impact."

5. Create metrics for success—and use them

Different industries need to focus on different vulnerabilities. Software in the healthcare industry, for example, tends to have major cryptographic issues, with nearly 73 percent of first-time software scans revealing a flaw in such programs. Government software tends to have more cross-site scripting problems: 69 percent of those applications had such a flaw, according to Veracode's report. Each industry should compare itself to peers to judge its progress.

Yet, a major metric that should be tracked as early as possible is the fraction of business-significant applications that are covered by automated testing, said John Dickson, principal at the Denim Group. 

"Even the guys who are good at security do not have  100 percent app portfolio coverage. So the first definition of victory is that you have 100 percent coverage—at least of the important apps." —John Dickson, Denim Group

Baby steps for secure software development

The report found that software security is moving from "bad" to "not all that bad," but there are still major issues. At the same time, the little things are still plaguing security. For example, more than one third of applications had hard-coded passwords, the study showed. Another 39 percent used broken or risky encryption algorithms, and one in six mixed trusted and untrusted data. In addition, many developers are not vetting all of the open-source and commercial libraries built into their software.

Overall, while security problems continue to plague software development, the general trend appears to be slow improvement. And many companies are doing software development right.  

"It can be disheartening for security teams and development organizations to continually hear the ongoing drumbeat of breaches. So we want to give a ray of hope and let them know that there is something that they can do to improve security and protect adjacent those breaches." —Tim Jarrett, Veracode

The best companies did much better at fixing vulnerabilities in their software than average firms. The strongest core developers—those who tested 20 applications or more—fixed 64 percent of vulnerabilities, compared to 38 percent overall. For smaller development programs, the difference was even more stark—high performers fixed 56 percent of their vulnerabilities, versus 13 percent for the median. 

"We continue to see most software passing a common sense policy, and we are also seeing a lot of organizations going in and fixing a lot of the vulnerabilities," says Jarrett. "And so we are seeing how far can you go when you set your mind to improving the quality of your software, and what can help you fix more vulnerabilities."

White paper: The business of hacking

Image credit: Flickr

 

APIs and automated testing: Go integrated for the best of both worldsOpen in a New Window

Modern IT applications are becoming more distributed: Mobile applications integrate with back-end systems through standardized interfaces, Internet of Things (IoT)-enabled devices communicate with each other and with third-party services, and IT service providers are exposing parts of their data and services through APIs in order to generate additional cash flow (a phenomenon known as the API economy).

So what does all that mean for development teams? Performing automated integration tests at the API level is rapidly becoming an indispensable step in the overall development and delivery process, since malfunctioning or underperforming APIs result in integration difficulties, a lower rate of adoption of the product or service and, ultimately, a loss of revenue.

Sadly, many software development teams and projects completely overlook the API layer when creating and executing automated tests. Too often, these teams create application components and accompanying unit tests, and then resort directly to end-to-end user interface-driven automated tests (for example by using tools such as Selenium WebDriver) when they want to determine whether multiple application components work correctly once integrated.

Writing and executing automated tests at the API layer should be an integral part of your overall testing strategy for distributed applications. Here are the benefits successful API testing adoption provides for testing and software development processes, and how you can get there. 

Continuous testing: A practical guide

Avoid "Big Bang" integration testing

While integration testing has been part of the test automation pyramid for as long as the model has existed, it's also been the most overlooked layer in functional test automation. All tests that exceed the scope of an individual unit, and therefore can't be covered by unit testing anymore, are often executed through user interface-driven, end-to-end test scenarios.

There is a place for end-to-end tests in any test automation approach. But while end-to-end tests can be seen as the ultimate integration test, where all components come together, having too many of them leads to test suites that take unnecessarily long to execute, and that are hard to maintain and keep stable.

It's often possible to test significant parts of an application's technical and business logic through an API. This can be a RESTful or a SOAP-based web service meant to expose data or logic to the outside world, or an internal API used for the sole purpose of gluing different application layers together and creating a good separation of concerns.

API test automation: the best of both worlds

Here are some reasons why API-level test automation can rightfully be considered the best of both worlds:

Increased scope compared to unit tests

Unit tests focus on the workings of individual components or small groups of components within a single application or application layer. But issues in distributed applications often occur where the scope of one application (layer) ends and the next one starts.

You will not find these issues with unit tests, but API-level integration tests are designed to verify whether components interact as designed or requested. As the ability to properly integrate with external components becomes more important, your need for a proper API testing strategy will increase.

Test environment management is a potential issue when integration testing in distributed applications. Getting all components in place at the same time, as well as provisioned with the desired test data, can be a complex task—especially when you develop components within different teams or even different organizations. In such cases, you can use approaches such as mocking, stubbing, and service virtualization to perform integration testing extensively, even when critical dependencies are hard to access on demand.
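As an example of that last point, a stubbing or service virtualization tool can stand in for a dependency that isn't available in your test environment. The following minimal Java sketch uses WireMock for that purpose; the port, path, and response payload are invented for illustration.

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

// Illustrative sketch: stand in for an unavailable downstream service with WireMock
// so an API-level integration test can still run. Port, path, and payload are made up.
public class AccountServiceStub {

    public static WireMockServer start() {
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // Any GET on /accounts/42 now returns a canned JSON response, so the
        // component under test can be exercised without the real dependency.
        server.stubFor(get(urlEqualTo("/accounts/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\": 42, \"status\": \"ACTIVE\"}")));
        return server;
    }
}
```

A test can then point the component under test at this stub's base URL instead of the real service endpoint, and stop the server once the test run completes.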

Increased stability and speed of execution compared to end-to-end tests

When you compare integration tests at the API level with end-to-end, user interface-driven tests, API tests have a narrower scope. That's because they focus on integration between two components, or application layers, whereas end-to-end tests cover all components and layers of an application or distributed system.

API tests make up for this loss of scope in two areas, though. The first area where API tests outperform end-to-end tests is in execution speed. Since end-to-end user interface-driven tests require firing up an application or browser, you spend a lot of test execution time waiting for screens or web pages to load and render. To make matters even worse, much of the data that gets loaded is often insignificant for testing (think, for example, of ad banners on a web page, unless, of course, these are the subject of the test).

API tests are generally built up out of individual request-response interactions (for example in JSON or XML format), and these result in less overhead, faster execution times, and therefore in shorter feedback loops for development teams.

The other factor in favor of API tests is their inherent stability. User interfaces tend to change, due to their dynamic nature, as a result of advanced front end frameworks, or due to rapid change requests from users or other stakeholders. But APIs, especially when exposed to third parties, usually have a more stable interface.  As a result, tests require less maintenance and produce fewer false negatives due to outdated tests.

Get serious about your API-level testing

Any serious testing and test automation approach needs to include API-level integration testing, but getting started can be daunting for those with no prior experience. Since APIs can cross component or application boundaries, the tests are often regarded as out of scope for developers, which leaves the responsibility to—you guessed it—testers. Whether that's a good or a bad thing, especially when considering the transition towards agile development teams, is beside the point. For better or worse, if you're a testing professional, it's in your wheelhouse.

So where do you start? One complicating factor is the absence of a user interface that you can use to access and test the API. Fortunately, you'll find myriad tools available to assist testers—even those without test automation or programming experience—in the writing and execution of useful, maintainable API tests. One such tool, REST Assured, is a Java-based DSL (Domain-Specific Language) that you can use to write readable and maintainable automated tests for RESTful APIs, even if you don't have much experience in object oriented programming.
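As a rough illustration of what such a test can look like, here is a minimal REST Assured sketch written as a JUnit test. The base URI, endpoint, and expected response fields are hypothetical and would be replaced with those of the API you actually need to test.

```java
import org.junit.Test;
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

// Minimal REST Assured sketch: the base URI, path, and response fields are
// hypothetical; substitute the API you actually need to test.
public class AccountApiTest {

    @Test
    public void getAccountReturnsActiveStatus() {
        given()
            .baseUri("http://localhost:8089")   // the API under test (or a local stub)
            .accept("application/json")
        .when()
            .get("/accounts/42")
        .then()
            .statusCode(200)
            .body("status", equalTo("ACTIVE"));
    }
}
```

Run under JUnit as part of the build, a test like this reads almost like the requirement itself, which is a large part of what makes API-level tests maintainable.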

Take a closer look at including test automation at the API level. I guarantee that you'll reap significant benefits in your testing and software development process.

Want to know more? A great way to get started is to drop in on my online presentation, "Testing RESTful APIs with REST Assured," at the Automation Guild online conference, where I'll demonstrate how to use REST Assured to write readable, powerful and maintainable tests for all of your API testing needs.

Continuous testing: A practical guide

 

5 mobile security issues that should worry every developer in 2017Open in a New Window

In 2016, mobile moved forward aggressively to become the primary medium of engagement for consumers globally. But the year also saw a huge number of security risks: Apple reacted to its first major security issue with Xcode (the vulnerability is called XcodeGhost), and a denial-of-service attack brought most of the Internet to its knees last fall. This time the attack didn't come from servers and PCs, but from commands sent from millions of infected mobile devices.

So, with more than 2.2 billion mobile users now active worldwide, what will be the significant threats that developers and IT Ops will need to manage in 2017? Here are five you should be tracking.

SANS 2016 State of Application Security Report

1. The threat from cheap, non-upgradeable, Android phones

Android is not going anywhere, and demand for it will continue to grow in 2017. The primary paths of growth will be in emerging Asian countries such as China and India; indeed, more than 200 million people in those two nations are expected to buy their first mobile device in 2017. Another area of rapid growth for Android devices will be African countries. Cashless payment systems are common practice in Kenya and other nations, and for this reason, it makes sense for cheap Android phones to become commonplace there.

The key word is "cheap." Companies are developing Android devices for less than $25. Today you can go to PriceBaba.com and choose from over 600 phones, all under $75, with the cheapest at just $12.54. The challenge with these low-cost phones is that the manufacturers do not design them to be upgraded. It does not matter if Google comes out with a new version of Android; these cheap phones do not change.

The way my team is addressing this challenge is two-fold:

  • Require that all phones your organization supports use the Google Play Store, and do so exclusively. You have a much lower chance of running a virus or malware app if the app comes from Google’s own servers. Tell your users not to download apps from other Android app stores that are not run by Google.
  • For enterprise use, choose phones that support Android for Work. This may surprise you, but many of the less expensive Android phones do support Android for Work. Couple that with mobile device management or enterprise mobility management software, and you have a secure way to deliver enterprise data to mobile devices.

Android continues to be a center of innovation for Google. Android 7.0, or N (for Nougat), is a big step up for Android, providing a foundation on which Google will build in 2017.

The most significant leap for Android in the next 12 months? Android, which has finally found a place in the enterprise, will be more modular, and easier to manage regardless of the manufacturer of the device on which it runs. Expect to see the first signs that Google is finally solving the problem of Android fragmentation.

2. Android Instant Apps security: Wait for it...

The new implementation of Android Instant Apps is breaking down the wall between mobile apps and the mobile web. But do Instant Apps invite the same security breakdown that Microsoft's notorious ActiveX plugins did for the desktop web? What do you need to do to embrace Instant Apps securely? My team is taking a wait-and-see approach; Instant Apps were only introduced last June at Google’s I/O conference.

The concept of Instant Apps is devilishly cool: When a user with an Android phone visits a website that also runs as an app (such as Amazon.com or Netflix), only the bits needed to execute the app are installed.

Think of Instant Apps as just-in-time delivery for the mobile world. After you leave the site, the app disappears. The benefit is that app developers can leverage the discoverability of the web without the need to go through an app store. There are also benefits for emerging markets, where cheap devices mean tiny memory, and users need to swap apps in and out as needed.

The problem, as Google readily admits, is that Instant Apps are currently a half-baked solution. Wait until the end of 2017 to see if Instant Apps receive the security protections needed to be successful on corporate devices before you open the floodgates.

3. Protect yourself from rise in mobile-based cyberattacks

In 2017, mobile devices will be the vector of choice for committing denial-of-service attacks. How do you prevent the same type of attack on your network? The challenge, fortunately, can be addressed by implementing an enterprise mobility management (EMM) system. EMM is an evolution of mobile device management (MDM) software, but it includes additional services, such as cloud-based identity-as-a-service (IDaaS) features, endpoint management, and enhanced security features for the apps you develop.

The rapidly evolving world of cyber warfare means that you must always review and update how you manage mobile devices on your networks.

In my organization, we plan to establish a global governance group with the mission to understand and implement mobile apps that both improve the productivity of our employees and protect us from malicious attacks. The group takes the view that an attack is going to come, so we must prepare now.

4. The rise of IoT devices means more data and communications to secure

The Internet of Things, IoT, continues to explode. Sensors and micro-devices are everywhere. How do you secure the data being passed between these devices?

Fortunately, the IoT is not quite as new as everyone would like you to think. Before there was IoT, there were machine-to-machine (M2M) sensors, and before that you had client-server. The difference is that IoT devices and sensors are much smaller.

The approach my organization is taking to protect ourselves from malicious attacks on our IoT devices is to build security deep into our systems from the get-go. Don’t use new, unproven IoT services; work instead with established, production-grade services, such as AWS IoT and Microsoft Azure IoT.

Leverage hardware that's already certified for different security levels, and follow app development best practices that comply with security standards when, for example, using Bluetooth low energy to connect sensors to an app on the user's phone. Do this, and your IoT plans for 2017 can focus on driving solutions rather than fixing security vulnerabilities.

5. Social commerce security: Get professional help

How do you provide secure connectivity for platforms within platforms? Working with Social Commerce is, frankly, a little daunting. On the one hand, you have platforms such as iMessage, released with Apple’s iOS 10. Apple has applied its standard approach to software security to iMessage. Brilliant.

Elsewhere it's the Wild West out there, and WeChat is a prime example. The best advice I can give when dealing with the security implications of social commerce is to work with a partner who has experience in this field.

Stay focused

In 2017, mobile vendors will continue to build on many of the technologies introduced in 2016. You'll see new technologies such as virtual reality, but these will remain on the fringe of broader adoption for most of the coming year. Don't be distracted.

Your focus, from a security standpoint, should be on the services and hardware vulnerabilities of the 2.2 billion mobile devices out there that may be trying to connect to your mobile commerce sites, read your email, or gain access to your company's intellectual property.

Image credit: Flickr

SANS 2016 State of Application Security Report

 

Best of TechBeacon 2016: 10 app security stories you don't want to missOpen in a New Window

You're never finished with application security—ever. You can design all the security controls you want into your software, follow every capability maturity and software development model out there, and test the daylights out of all your apps. But at the end of the day, you are never done. There’s always something you overlooked, or left behind, or that crept into your code and creates an exploitable vulnerability.

SANS 2016 State of Application Security Report

TechBeacon’s top 10 security stories of 2016 cover the range of issues and trends that will help you get focused on what you may have missed so that you can move forward, with better app security, in the coming year.

57 open source app sec tools: A guide to free application security software

Security must be an integral part of any application development process; you can't just bolt it on as an afterthought at the end of the cycle. But integrating it into your development and delivery agenda doesn’t have to be expensive, thanks to a slew of free open source application security tools. TechBeacon's Mike Perrow offers this handy guide to the best of them.

5 emerging security technologies set to level the battlefield

If there’s one thing that security professionals don’t lack, it's security tools. In recent years, security vendors have flooded the market with a vast array of products and services designed to protect against every conceivable threat out there, and then some. But do you know which tools will matter the most in coming years? TechBeacon contributor John P. Mello reports on five emerging technologies that could level the playing field.

How to hack an app: 8 best practices for pen testing mobile apps

Whether you like them or not, mobile applications are not going away. Users will continue to download and use them in the enterprise, without regard for the security implications. That means it’s up to you to perform penetration testing to ensure that the apps people use don’t pose a risk to enterprise security. Johanna Curiel, co-founder of Ossecsoft, offers a set of recommendations for pen testing mobile apps.

Pen testing cloud-based apps: A step-by-step guide

Penetration testing is a good way to unearth vulnerabilities in software. But it is one thing to pen test on-premises applications and quite another to pen test applications that run in the public cloud. In addition to the technical challenges, you'll face legal obstacles. David Linthicum, senior vice president at Cloud Technology Partners, explains the hurdles you need to overcome when conducting pen tests on your cloud-based apps.

DevSecOps: 9 ways DevOps and automation bolster security, compliance

Contrary to what some might believe, DevOps practices aren't incompatible with information security best practices. In fact, if done right, DevOps can bolster application security by helping to identify and mitigate security issues earlier in the development lifecycle.  DevOps can also help speed up the automation of information security functions and services. Electric Cloud CTO Anders Wallgren explains how.

State of app security 2016: Most common vulnerabilities, top trends 

Developers and security experts have acknowledged the need to bake in security during development, not bolt it on at the end of the process. The Open Web Application Security Project, and other efforts, have led to some progress in this area. But a lot of work remains to be done in making security an integral part of the application development lifecycle, reports contributor Jaikumar Vijayan.

Cloud app security: How not to fail

Software developers tend not to think of themselves as responsible for security. That’s a mistake. Trends such as the movement to DevOps and CloudOps, and the growing need for organizations to enable authentication at the application layer, are driving the need for cloud app developers to become experts in security. David Linthicum offers advice on the high-level concepts that developers need to focus on if they want to succeed at cloud app security.

32 app sec stats you should be tracking

Most organizations manage a mix of Web, mobile, open-source and cloud applications, and each environment presents its own set of security challenges. That's why it's important to keep an eye on the latest trends and practices in each realm. Did you know, for instance, that most organizations plan to spend more on application security in 2017 than they did last year, and that nearly 8 in 10 use open source security tools? Jaikumar Vijayan reports on 32 app sec trends that you should be watching.

4 ways to exploit microservices architecture for better app sec

The microservices approach to software development enables faster and more frequent updates, and mitigates some of the challenges involved in ensuring that different development groups work and release in tandem. But are you aware of all of the security issues associated with microservices? Do you know why security professionals react to microservices with so much trepidation and skepticism? Bernard Golden, CEO of Navica, lays it all out.

6 application security lessons every team should study

One of the first dictums of application security is to never trust users to behave in a secure manner. Other fundamentals you need to keep in mind at all times include never having hard-coded credentials in your applications, and not forgetting that you are ultimately responsible for the security of not just your own apps, but third-party software as well. Security Journey's Chris Romeo describes the six app sec lessons all security teams should study.

SANS 2016 State of Application Security Report

 

Best of TechBeacon 2016: Performance revs upOpen in a New Window

Trends like exploding mobile app use and test automation tools are transforming the role of performance testers and QA staff everywhere. The field continues to offer plenty of opportunities for career growth—for those who know how to adapt and respond to the changes that are happening around the discipline.

Continuous testing: A practical guide

TechBeacon’s top 10 performance stories of 2016 cover the biggest trends in this space.

Web performance testing: Top 12 free and open source tools to consider 

There’s little use having a really good web application if it doesn’t perform as it should in the real world. Metrics such as fast load times, browser- and client-side performance, and server-side request handling are all vital to ensuring good web application performance. AppDynamics' developer evangelist Dustin Whittle provides a handy list of open source tools you can use to test web performance.

6 common test automation mistakes and how to avoid them

It's not terribly difficult to automate a test process for your software. The problem is, applications have a way of changing from under you. The code you ship today will look quite different from what you ship six months or a year from now. If you don’t evolve your testing tools to keep up with the morphing nature of the application environment, you will run into problems. Matthew Heusser, managing consultant at Excelon Development, lists the most common mistakes organizations make when automating their testing processes.

Mobile app testing: When to use real devices versus emulators 

Testing professionals face tradeoffs when using both emulators and real devices for mobile app testing purposes. Real devices, for instance, are needed for testing app performance, while emulators are good for initial quality assurance purposes.  There are other benefits and disadvantages to both methods. Do you know what they are? Will Kelly reports.

Selenium 3.0, 4.0, and 5.0 roadmap finally unveiled 

Selenium 3.0, the newest version of the open source web browser automation tool, will ship by year end. Forget the fact that it’s been three years since the people in charge of Selenium announced the version: There’s still plenty of excitement in the developer and automation tester communities for it. TechBeacon’s Mitch Pronschinske speaks with a testing engineer at Finnish development firm BITFACTOR Oy about the significance of Selenium 3.0 and future versions.

6 top open-source testing automation frameworks: How to choose 

Why build when you can use open source instead? Developers and software engineers have a vast array of open source tools from which to choose—some good, others less so—for almost every conceivable need. And so it is with test automation frameworks. Multiple test automation tools are available in the open source community to help make your code reusable, maintainable and stable. TestTalks' Joe Colantonio lists six of the best.

The future of software testing: How to adapt and remain relevant  

A recent IEEE article raised questions about the continued need for human testers in the software development process. The author argues that human testers are not only unhelpful, but detrimental to software development. The reality, writes Matthew Heusser, is that if you know how to adapt and understand why and how the changes are happening, you’ll be able to thrive in the emerging new world.

The 7 soft skills every QA tester needs  

Guess what? It turns out that all of those so-called "soft-skills" that you tend to list at the bottom of your resume—things like your communication abilities and knowing how to play well with others—are very important these days.  Often it's the skills that hiring managers tend to underrate and overlook that matter the most. Michael Cooper, chief quality officer of healthcare IT Leaders and Run Consultants, offers up the most important soft-skills he looks for when hiring QA staff.

What makes a good QA tester? 4 KPIs essential to software testing 

So you think you know your job as a software tester. And you believe you have the technical chops, the communication skills, and the attention to detail necessary to ensure that your organization’s software products meet whatever quality standard they might be required to achieve. But do you know the metrics and the KPIs that matter to your organization when evaluating the effectiveness and quality of testers? HPE's Ori Bendet has the lowdown on the metrics that matter—and those that don’t.

Switching careers in QA: From manual testing to automation development 

Some people believe that manual test engineers are an endangered species. If you are a manual tester, and you're looking to break into the testing automation space to stay relevant, here are a few things you need to keep in mind, starting with the fact you’ll be doing a lot of actual coding. T.J. Maher, an automation developer with Adventures in Automation, outlines from first-hand experience just what you can expect when making the switch.

9 metrics that can make a difference to today’s software development teams

Metrics matter in software development, but only if you tie them to specific business goals. Otherwise, all you're doing is measuring things just for measurement’s sake. Steven Lowe, principal consultant developer at ThoughtWorks, highlights nine metrics that, when measured accurately, will help you make incremental improvements to your production environment.

Continuous testing: A practical guide

 

Best of TechBeacon 2016: Mobile shifts gears to apps-firstOpen in a New Window

With enterprise mobility management practices maturing, discussions around mobile adoption have finally evolved beyond BYOD and the different implementation options available to organizations. Instead, the focus is on strategies for building, managing, improving and evolving mobile applications and services.

Mobile Analytics Playbook: A practical guide

TechBeacon's top 10 mobile stories of 2016 capture the biggest trends in this space.

iOS 10: The 10 big changes that will affect enterprise users

Apple introduced several significant changes and feature updates in iOS 10, the latest version of its mobile operating system. If the organization you work for is like most others, there’s a good chance that many of your users have iOS 10 running on their iPhones and iPads. But are you ready for it? Have your apps been upgraded to support the new iOS? Do you understand how to accommodate all of the privacy and security improvements that Apple has built into the operating system without losing your ability to manage the devices? Matthew David, senior manager of the Mobility Center of Excellence at Kimberly-Clark, has some pointers.

The top 6 reasons mobile apps crash: How to best avoid Murphy 

All it takes is a few crashes, freezes or delays in load times for users to uninstall your mobile apps from their devices. Studies show that users have little tolerance for mobile software that doesn’t perform to their expectations. Some of the biggest pitfalls include bad memory management, poor exception handling, and inadequate testing, reports Erik Sherman.

40 leading Android developers to follow on Twitter 

One of the keys to keeping on top of the skills and tools you need to do your job well as an Android app developer is to listen to the experts. Fortunately, social media tools offer a great way to do that. Many of the leading Android developers use Twitter to share news, ideas and tutorials. Following them is a good way to stay abreast of the latest and the greatest, says TechBeacon's Mitch Pronschinske, who lists 40 Android developers you absolutely need to follow on Twitter.

How to hack an app: 8 best practices for pen testing mobile apps 

Users have a tendency to download mobile apps while giving little consideration as to how secure, or not, they are. Studies show that the vast majority of mobile users blithely assume that their apps are adequately secure, and don't hesitate to use them in an enterprise setting. If you want to mitigate the risk of attackers exploiting weaknesses in mobile apps on users' phones to get at your enterprise data, perform penetration testing on them. Johanna Curiel, co-founder of Ossecsoft, gives the how and the why.

3 hottest cross-platform mobile dev IDEs 

An integrated development environment can make it much easier for app developers to build applications for any of the major mobile platforms—Android, iOS and HTML. Early cross-platform tools, such as Titanium and PhoneGap, have failed to live up to their early promise. But that doesn’t mean there aren’t others. Matthew David has the lowdown on three of the most promising cross-platform tools for mobile app development.

Web-native mobile app frameworks: How to sort through the choices 

Web-native mobile application frameworks reduce the need for developers to learn Java, Objective-C or Swift in order to write a native application for iOS or Android. These frameworks allow developers to build mobile applications using the web technologies with which they're most familiar, so long as they know the right ones to choose. Maximiliano Firtman, author of High Performance Mobile Web, describes how to sort through the choices.

Top 4 ways to add single sign-on to enterprise mobile apps 

Implementing a single sign-on to mobile apps can ease identity management headaches for enterprises that have to deal with a growing mobile workforce—which is pretty much everyone these days. But mobile technologies add a layer of complexity that traditional desktop SSO technologies can’t handle. Matthew David describes the best options for mobile SSO.

How the IoT is creating today's hottest tech job: Edge analytics

If you don’t know what edge analytics is, it might be a good idea to get familiar with the technology. The emerging Internet of Things (IoT) is once again redefining traditional notions of the network edge, and is driving a need for skills that can understand and interpret all the data being generated by the devices that operate there. Whether you are still early in your career, or just looking to broaden your market appeal, consider developing new skills in this hot area, says contributor Christopher Null.

7 secrets of the mobile app beta-testing masters 

Beta testing is one of the best ways to get critical feedback from users on a soon-to-launch mobile app, but beta testing programs can be a big problem if you have thousands of testers to shepherd through the process. Contributor Will Kelly reports on the strategies you can adopt to run a successful mobile app beta.

How to choose the right MBaaS: Firebase, CloudKit, or Kinvey? 

One way to offload some of the headaches associated with managing modern mobile services is to use a mobile backend-as-a-service (MBaaS). MBaaS vendors provide a slew of handy mobile services, including authentication, storage, push notifications, analytics, and ad management. Do you know who the top players are, how much their services cost, or how they can improve mobile service delivery? Matthew David has the answers.

Mobile Analytics Playbook: A practical guide

 

Best of TechBeacon 2016: New technologies that redefined IT Ops

Cloud computing, DevOps, containerization, and serverless computing are just some of the trends that have been transforming IT operations and service management functions in 2016. Runtime environments have become increasingly complex, exposing limitations in familiar and long-cherished practices for IT services management, such as ITIL. The need for IT Ops to evolve and adapt its practices is especially urgent where legacy infrastructures are involved.

TechBeacon's top 10 IT operations stories of 2016 tracked these trends—and offered practical advice for moving forward.

One year using Kubernetes in production: Lessons learned

A frequent caveat you’ve probably heard about using Kubernetes for container cluster management is that it is not quite production-ready. That sentiment applied not just to Kubernetes but to other clustering tools, such as Docker Swarm, during much of 2015. Since then, however, these tools have been evolving at lightning speed. Paul Bakker, software architect at Luminis Technologies, reflects on his organization’s experience using Kubernetes—the good and the bad—and why he thinks the tool is more mature than you might think.

Why the new IT4IT Reference Architecture is a game changer

IT operations management and service management professionals know just how complex runtime environments can get. The venerable set of practices captured in ITIL is no longer enough for managing the business of modern IT. The Open Group’s IT4IT Reference Architecture, released last October, is designed to address the changing requirements of IT operations, and has already garnered considerable vendor support. Daniel Warfield, Senior Enterprise Architect at CC and C Solutions, explains what IT4IT offers and why you need it.

The state of containers: 5 things you need to know now

Containers have become the thing. From virtually no market share to speak of in 2015, Docker now runs on more than six percent of all hosts. Adoption has increased fivefold in a single year, and development organizations that try Docker tend to adopt it very quickly. Do you know all the things you need to consider to successfully leverage container technology? David Linthicum, SVP of Cloud Technology Partners, highlights the five most important areas.

The essential guide to serverless technologies and architectures 

Applications built using microservices are flexible and scalable, but don’t do especially well on legacy distributed computing infrastructures. Serverless computing offers a way out by eliminating the need for developers to worry about underlying physical infrastructure and systems software. While the model is set to play a critical role in the enterprise, it is not suited for all use cases. In this thorough assessment, Peter Sbarski, Vice President of Engineering at A Cloud Guru, and author of Serverless Architectures on AWS, shares tips to help you determine when and where going serverless makes sense.

4 myths about containers and continuous delivery—It doesn't get easier 

All of the excitement around containers has led many to believe that containers somehow automatically turbocharge continuous delivery pipelines. The reality is that containers help, but you still need to do a lot of hard work to achieve continuous delivery.  Speaker and author Todd DeCapua highlights four of the most common misconceptions surrounding containers and continuous delivery.

Serverless computing: 5 things to know about the post-container world 

In launching its Lambda service for Amazon Web Services (AWS) last year, Amazon ushered in what many are calling the serverless era. It’s the first computing model that does not require the application operations team to directly manage the environment that executes and runs the code. Do you know how serverless computing can help your organization? Navica CEO Bernard Golden gives you the low-down on serverless computing and the business case for using it.

3 principles of Infrastructure as Code: What every manager should know 

Building an Infrastructure as Code (IaC) capability is a good way to bridge the gap between the developer and IT operations groups. It gives developers a way to focus on application development without having to worry about the minutiae of their physical infrastructure, and it gives operations teams the assurance that the code they're sent will run without disruption. Gary Thome, Vice President and Chief Technologist of Converged Datacenter Infrastructure at HPE, describes what managers really need to know about IaC.

How to stay relevant in the DevOps era: A SysAdmin's survival guide

Systems administrators still have a role to play in the DevOps world. But staying relevant becomes harder when the focus is all about speeding application delivery through the merger of development and IT operation groups. Skills like debugging, legacy systems expertise, and knowledge of things such as Python, Perl and configuration management can help, reports contributor Robert Scheier.

Doing DevOps with legacy IT: Driving change from the Ops side

Rewriting legacy applications so they work better in a DevOps world can be expensive, challenging, and downright risky. So how do you adapt your portfolio of legacy applications so they play more nicely in an environment where agility, speed, and continuous development are becoming ever more important? Using modern tools to automate as much of your IT operations as possible is a good place to start. HPE CTO Jerome Labat outlines this and other ways that operations teams can help adapt legacy environments to the DevOps world.

How to reengineer your IT organization for cloud

Unless you’ve been living under a rock for the past several years, you know that the cloud revolution is here, and that almost every organization has embraced it. But do you know what it takes to be successful with cloud computing? Do you know how to reengineer your development, QA and production processes so you can fully leverage the flexibility and agility of the cloud? Bernard Golden explains how.

 

Best of TechBeacon 2016: DevOps comes of age

With more organizations adopting DevOps practices, discussions around the benefits of continuous delivery are finally giving way to questions about the availability of tools and best practices for implementing CD. Throughout 2016, DevOps experts and practitioners weighed in on the growing availability of DevOps tools and on the lessons to be learned from organizations that have not only dipped their toes in the DevOps world but have achieved large-scale digital transformation.

2016 State of DevOps Report

Here are TechBeacon’s top 10 DevOps stories of 2016.

6 top open-source testing automation frameworks: How to choose

Several perfectly viable open source test automation frameworks are available, so why build your own? Tools like Serenity, RedwoodHQ, Sahi and Gauge can help make your test automation code reusable, maintainable and stable. Be sure to consider these open source frameworks and libraries before needlessly venturing out on your own. TestTalks' Joe Colantonio lists six of the best options.

One year using Kubernetes in production: Lessons learned

Containers and container orchestration tools are great for speeding application delivery and automating the deployment pipeline. But many organizations haven't quite bought into the production readiness of these technologies yet. Paul Bakker, software architect at Luminis Technologies, shares the lessons his organization learned, sometimes the hard way, from using the Kubernetes container cluster management tool in production for one year.

7 steps to choosing the right DevOps tools

There’s a lot to consider if you are planning to adopt DevOps practices in your organization. Automated provisioning, testing, building, and deployment are just a few of the moving parts involved. For DevOps to work, you need to enable continuous feedback and have the capability to continuously log everything that’s moving back and forth across your development environment. Cloud computing guru David Linthicum draws on his consulting experience to share tips on how to choose the right tools.

Going big with DevOps: How to scale for continuous delivery success

The best way to scale adoption of DevOps practices is to create “pockets of greatness” within your organization to demonstrate the value and the benefits of continuous delivery. Large-scale DevOps transformation doesn’t happen overnight. It takes the right team, the right architecture, and demonstrable success within the current environment. The Phoenix Project author and DevOps Enterprise Summit organizer Gene Kim describes the adoption patterns associated with successful DevOps transformations.

Back to waterfall: When agile and DevOps don't work well 

DevOps may be a game-changer within many organizations, but it is definitely not for everyone. Contrary to what some people might believe, there are situations where the waterfall approach to software development works better. Older systems, such as those based on ISAM and COBOL, and projects involving system design and planning, are arguably the worst fits for DevOps methods. David Linthicum explains why.

What we learned from 3 years of sciencing the crap out of DevOps

Improved stability and throughput are just two of the positive consequences of embracing DevOps principles. Data from the annual survey of DevOps practices that Puppet Labs has conducted over the last three years shows that continuous delivery improves both IT and organizational performance while making life better for technical teams. Nicole Forsgren, CEO and Chief Scientist of DevOps Research and Assessment, explains.

The best open-source DevOps security tools, and how to use them

Nearly 75% of companies will have adopted DevOps by the end of 2016. If your company is one of them, what are you doing about improving code security? Do you have a process for managing the software supply chain, and verifying the security of commonly used components and frameworks? What about vulnerability scanning? Have you automated that process? Robert Lemos rounds up tips on the processes and the tools that can help secure your DevOps environment.

Containers reality check: Why they're still not production-ready

So you’ve been playing with containers for some time now and figure you are ready to use them in a production environment? You might want to reconsider. Just because organizations are adopting containers faster than ever doesn’t mean that the technology is ready for prime time. There are several security, scalability, and manageability issues that need to be addressed before containers can become fully mainstream, says 451 Group analyst Jay Lyman.

Containers 2.0: Why unikernels will rock the cloud

Cloud orchestration tools such as OpenStack and CloudStack have enabled organizations to more successfully harness the benefits of cloud computing, including infrastructure flexibility and scalability. But little has changed about the nature of workloads in the cloud: They look exactly like the machine images running on-premises in traditional data centers. Open source evangelist Russell Pavlicek explains how unikernels can improve cloud services agility by supporting smaller, faster, and more secure workloads.

43 free and open-source tools that put the Ops in DevOps

It’s not enough to only have build and developer tools if you want to deploy a full DevOps pipeline: You also need tools that let you create and administer the DevOps production environment. You need tools for configuration management, log management, deployment, monitoring, measurement, and environment management. Excelon Development's Matthew Heusser lists 43 open-source tools that can help you do all this and more, for free.

2016 State of DevOps Report

 

Best of TechBeacon 2016: App Dev's rapidly changing landscape

Change is a constant in the application development field. There’s always something new to learn, some myth that's being debunked, or some trend that is emerging. And so it was in 2016.

Continuous testing: A practical guide

Here is TechBeacon’s collection of the top 10 app dev stories of 2016.

5 emerging programming languages with a bright future

Most software developers are familiar with at least one or two mainstream languages, such as Java, Python, C++, and Ruby. A smaller number have heard of or actually dabble in languages such as Go, Swift, and Haskell, each of which has a respectable number of followers but hasn't yet made it to the big leagues. But what about languages like Kotlin, Crystal, Elixir, and Elm? There’s a good chance you haven’t heard of them. TechBeacon's App Dev editor, Mitch Pronschinske, explains why you might want to pay attention to this third tier of languages.

How learning Smalltalk can make you a better developer

If you haven’t considered Smalltalk because you thought it was an obsolete language, you are missing out on a good opportunity to improve your skills as a developer. Smalltalk introduced the world to many of the technologies, processes, and features that underpin today’s most popular languages. Learning how to use it can give you an edge in multiple ways, writes Richard Eng of Smalltalk Renaissance.

Perl is not dead: It was early web novices that gave it a bad name

Considering the relatively scant respect that Perl gets these days, it’s easy to forget how popular it was in the late '90s and early '00s. But it is a mistake to presume that Perl is dead. Though its demise has been anticipated for a long time, use of the Web language is actually thriving, and demand for Perl skills has remained steady over the last several years. Perl expert Curtis Poe drills down into the reasons why.

Complete guide to the top 24 coding bootcamps

It used to be that you needed a four-year college degree or some kind of formal certification to get your start in software development. Not anymore. Coding bootcamps that emphasize the languages and skills organizations need can help launch your career in software development, or help you get up to speed on a new language, much more quickly. Here, Erik Sherman delivers a handpicked list of the 24 best coding bootcamps.

14 things developers love to hate

Developers know the feeling. You are concentrating intensely on something, and you're close to figuring out a problem that you’ve been grappling with all day, when someone rudely interrupts your train of thought and you have to start over again. Or, your team has barely completed the core requirements for a project before people suddenly start asking for a slew of changes and updates. What are your gripes? See if you can identify with Mitch Pronschinske's list of the things that developers love to hate.

How terrible code gets written by perfectly sane people

It’s not just bad programmers who write lousy code; sometimes good ones do it as well.  Are your developers forced to put too much emphasis on product delivery, rather than code quality? Are they focusing too much on metrics, and ignoring proven practices? Watch out, writes senior developer Christian Maioli.

Should software engineers be certified?

As important as it might seem to have some sort of formal license or professional certification for software engineers, the idea is an impractical one. Your local plumber or electrician can be held to a certification requirement by a regulatory body because the work they do is physical and local. Software developers can be based pretty much anywhere in the world, so a certification standard would require a truly global regulatory framework, which isn't going to happen, writes Hewlett Packard Enterprise Senior Researcher Malcolm Isaacs.

The top 6 programming languages for IoT projects

There are programming languages, and there are programming languages. Do you know which ones are the best fit for your IoT projects? The “things” that you connect to the Internet are, in a sense, computers as well. But there are important differences between writing apps using Java for your desktop and using Java for an IoT app. Developer Peter Wayner lists some of the best choices.

21 dangerous pieces of code and programming missteps

It doesn’t take much to cause programs to crash and for all kinds of things to go horribly wrong. Sometimes all it takes to delete your entire customer database or to poke a security hole in your software is an errant comma or a missed semi-colon, says Erik Sherman.

A software engineer's guide to data science

As a software developer, you have probably heard the term "data scientist" tossed around quite a bit. But do you know what it really means? Or what data scientists really do? Or how they are likely going to affect your life as a developer? Malcolm Isaacs offers this analysis of what to expect.

Continuous testing: A practical guide

 

Best of TechBeacon 2016: Agile shows age but no less transformational

Fifteen years is a long time in the technology industry, so it should come as no surprise that agile development practices today bear little resemblance to the values and the principles expressed by the group of visionary developers who created the Agile Manifesto back in 2001. There’s little doubt, however, that agile has radically transformed software development practices in good, and sometimes unexpected, ways.

2016 State of DevOps Report

Here's TechBeacon’s list of the top 10, must-read agile stories from 2016.

Uncle Bob Martin: The Agile Manifesto, 15 years later

More than 15 years ago, a group of software development engineers met at a ski resort in Utah and drafted what would become the basis of the agile process for software development. Robert (“Uncle Bob”) Martin, one of the 17 developers behind the Agile Manifesto, talks with HPE senior researcher Malcolm Isaacs about the impact and the legacy of that meeting.

Scrum vs. Kanban: How to combine the best of both methods

If your organization is like the many that have moved from Scrum to the Kanban model of agile software development, you probably know that Kanban is a better fit for many teams than Scrum. Where Scrum is somewhat overly prescriptive in nature, Kanban is not, and is bound by only three broad rules. But that lack of structure can be a problem, as agile consultant Yvette Francino explains.

Back to waterfall: When agile and DevOps don't work well

DevOps can be truly transformative, but it's not a silver bullet. Environments that emphasize centralized application deployment, such as the cloud, can benefit from continuous delivery. But there are some situations where the CD model just doesn’t work at all, such as in ISAM and COBOL environments, as well as projects that involve system design and planning. David Linthicum explains why.

"Blameless" postmortems don't work. Here's what does

Much as DevOps teams like to believe otherwise, there’s no such thing as a blameless postmortem. Humans are simply hardwired in such a way that we give voice to uncomfortable and painful feelings by blaming others. So instead of being fixated on blamelessness, try adopting more blame-aware postmortems. The goal should be to have actionable outcomes, says J. Paul Reed.

Large-scale agile frameworks compared: SAFe vs DAD

Agile development practices tend to work well for smaller projects, but are less suitable for large-scale projects that involve multiple distributed teams. The Scaled Agile Framework (SAFe) and Disciplined Agile Delivery (DAD) are two of the most popular frameworks available for large-scale development projects. They offer guidance on coordinating all of the different moving parts in a big development project, especially in the early phases. Here are a few tips for choosing between the two, from Yvette Francino.

Prioritize your backlog: Use Weighted Shortest Job First for improved ROI

In theory, the Weighted Shortest Job First (WSJF) technique makes it relatively easy to prioritize projects that need to be completed. The idea is that you assign a value, based on importance, to each project awaiting approval, and then do some math involving the expected length of each job to arrive at a relative ranking. Confused? Excelon Development's Matthew Heusser offers these tips on how best to use the technique to figure out which tasks will give you the biggest bang for the buck.

The 4 biggest challenges in moving to Scaled Agile Framework (SAFe)

The Scaled Agile Framework (SAFe) provides useful guidance for large-scale agile projects that involve between 50 and 125 developers. But as with anything that's relatively new, adopting SAFe can be a challenge. Agile transformation specialist Anthony Crain walks you through the biggest challenges: identifying the initial epics and value streams, ensuring code quality, and executing a release planning session.

5 deadly fallacies that can kill your agile implementation  

One of the most common misconceptions surrounding agile is that your team is ready for it and can handle the multi-functional and self-organizing requirements demanded by the practice. These and several other similar fallacies often cause agile projects to fall short of their promise. Comcast’s director of software engineering, Stephen Frein, gives you the low-down on five of the deadliest ones.

You're not agile unless you're using behavior-driven development

Behavior-driven development (BDD) offers a way for product teams to test and validate application performance by keeping the user experience front and center all of the time. Many developers think that BDD is pure agile. But while it has been around for 10 years, few have adopted it. One of the biggest problems, says ArcTouch's Eric Shapiro, has simply been getting developers to understand what BDD is all about.

Moving beyond MVP: 5 agile design practices you can't afford to ignore

Agile teams often struggle to go faster, or even maintain velocity, because of a failure to incorporate design in their projects. Many assume that the focus agile places on developing working code somehow eliminates the need for all design and modeling activities. That's not true, says ThoughtWorks' Steven Lowe, who lists the five design practices that absolutely must be incorporated into any agile project.

2016 State of DevOps Report

 

AI is the future of ChatOps—and the end of ChatOps as we know it

At the peak of any tech cycle, one technology always seems to be the ultimate solution to a problem—which, of course, is never the case. The “next big thing” is always being cooked up somewhere to make the current solution look primitive. ChatOps is no exception.

With the explosion of tools, systems, and consoles intended to improve how we work, ChatOps was a necessary evolution to combat data overload and distill information down to its most relevant, useful components. Rather than manually juggling 20 different tools and systems, ChatOps creates a single, intuitive interface that effortlessly integrates relevant information into the right channels. It becomes a critical link in a chain of tools, people, and processes that allows teams to get their jobs done more efficiently and effectively.

2016 State of DevOps Report

While ChatOps is transforming the way we work, it’s still very much a wild and untamed beast. Implementations range from decent to disastrous, and without a definitive best practices toolkit, the vast majority of organizations must learn as they go.

With time and patience, however, we’ll refine our approach and develop a more standardized ChatOps model. We’ll gain new and wonderful capabilities and UI enhancements, courtesy of artificial intelligence (AI) and augmented reality (AR). This won’t be the death of ChatOps, just the end of ChatOps as we know it today.

How will it all go down? Based on our struggles now, here are my predictions.

1. Organizations will realize that the onboarding of new employees into an unstructured, disorganized chat world is a drag on velocity 

This is especially true for large enterprises, but smaller organizations will find the inefficiencies and bottlenecks equally intolerable. The new standard across the board, regardless of organization size, will be a refined, systematically organized chat model that immediately and contextually brings new users into the loop. While this is particularly critical at scale, smaller teams will also be following specific protocols to make onboarding as smooth and painless as possible.

2. As machine learning ops rises, people will combine the learnings and insights of a single organization’s application with those of peer organizations

The first phase of this trend will consist of massive analysis of past operational issues and service disruptions, and the actions that were taken to successfully diagnose and resolve them. These solutions will become invaluable to operations teams and developers, empowering them to execute pre-vetted resolution recommendations and creating feedback loops to strengthen machine learning ops (MLOps) signals.

As more applications and services start to self-heal, we’ll see the technical indicator of the end of current ChatOps. Its AIOps replacement will start small, but will quickly figure out who to engage for detecting, diagnosing, and resolving problems with minimal human intervention. It will follow a similar exponential intelligence trajectory as other machine learning-powered systems, which start off relatively dumb, get incrementally better, and then appear to suddenly become very good at what they do.

An example of how this might play out would be in the relationships between the different services that form an application. An untrained system could get a basic start from a CMDB or service directory; given the poor track record of those systems, however, it would frequently be incorrect in its understanding of the application. But with access to user transactions, application logs, network flows, and other data, the machine would get smart.

The progress might be slow at first, but it would reach uncanny levels of accuracy very quickly. We’ve seen this at play with other systems we utilize every day, such as autocorrect on our mobile phones, Google Translate, Amazon recommendation engines, and more.

3. ChatOps will transform the user experience and interface for ops teams 

Machine learning and AR-powered ChatOps interfaces will enable faster user onboarding and more scalable teams, creating a superior experience to command-line interfaces—even those enhanced by helper bots.

Imagine how effective it will be when operations engineers can be visually interrupted for high-priority tasks, and can combine a visual interface with the power of a sophisticated Natural Language Processing (NLP) system to take action. The killer combination of smart audio and visual running in an AR interface will support richer, better forms of collaboration, and ultimately signal the demise of ChatOps as we know it.

The next generation of ChatOps will deliver

As we get better at ChatOps, organizations will reach chat nirvana faster than ever. We will see all kinds of standard integrations, such as video and voice collaboration components, connectors to continuous integration and delivery tools (to drive richer data into operational processes), more intelligent helpers (to automate bringing users into the right channels), and reporting tools (to query data from many IT systems in the middle of a firefight). Data capture and analysis around chat activities will become the norm, allowing organizations to continuously improve the performance and results of operations teams.

In spite of all these improvements, however, the overarching principle of ChatOps will stand resolute: You should use chat to connect separate systems into a single console, and to expedite common workflows.

How would you like to see ChatOps evolve? Share your ChatOps wish list in the comments below.

2016 State of DevOps Report

Image credit: Flickr

 

Event-driven computing: A best practice for microservice architecture

Today’s leading-edge applications are capable of dynamic and adaptive behavior, and building them requires you as a developer to use increasingly dexterous tools and supporting infrastructure, including microservices. You might be asked to build data-centric apps that automatically index documents, as in Google Drive, perform facial recognition on photos, or run sentiment analysis on video and audio newscasts.

All of these applications leverage data in new ways.  And in some cases, the decoration and tagging of data with intelligent metadata has become more important than the data itself. To keep up with continuously evolving needs and expectations, enterprise developers across industries are shifting away from traditional application architectures in favor of more fluid architectures for building data-centric applications.  

Here are several ways that microservices, connected via event-driven techniques, can help you replace the capabilities of older monolithic systems with more flexible and easier to maintain code.

2016 State of DevOps Report

The old challenges of monolithic systems

Key elements that enable the new paradigm are found within tool chains as well as in the underlying infrastructure or platform. Applications are moving away from monolithic paradigms, in which a single application is responsible for all aspects of a workflow. While effective for many legacy use cases, monolithic applications have challenges with:

  • Scalability. In many cases, monolithic applications are designed to run on a single, powerful system. Increasing the application’s speed or capacity requires forklifting onto newer, faster hardware, which takes significant planning and consideration.
  • Reliability & Availability. Faults or bugs within a monolithic application can take the entire application offline.  Additionally, updating the application typically requires downtime in order to restart services.
  • Agility. Monolithic code bases become increasingly complex as features are added, and release cycles are usually measured in periods of 6-12 months or more.

How are these challenges being met? To build applications capable of dynamic and ever-changing capabilities, architectures should be composed of smaller chunks of code, which is why event-driven computing and microservices are gaining in popularity. The relationship between the two is as follows: microservices should be designed so that they notify each other of changes through the use of events.

Microservices are the way forward: Automation and decentralization

As you know, microservices break more traditionally structured applications up into manageable pieces that can be developed and maintained independently.  Because these smaller components are more lightweight, the codebase for each can be significantly simpler to understand,  leading to a more agile development cycle.

Additionally, microservices are often decoupled, allowing for updates with little to no downtime, as the other components can continue running.

Event-driven computing: Triggering adaptation

Event-driven computing is hardly a new idea; people in the database world have used database triggers for years. The concept is simple: whenever you add, change, or delete data, an event is triggered to perform a variety of functions. What's new is the proliferation of these types of events and triggers in applications outside of the traditional RDBMS.  
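
To make the trigger idea concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table names, columns, and trigger are hypothetical and chosen purely for illustration.

# A sketch of the database-trigger flavor of event-driven computing:
# inserting a row fires a trigger that records an "event" a downstream
# consumer could react to. All names here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE documents (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE events (id INTEGER PRIMARY KEY, action TEXT, doc_name TEXT,
                         created_at TEXT DEFAULT CURRENT_TIMESTAMP);

    -- The trigger is the event producer: it fires on every insert
    CREATE TRIGGER document_added AFTER INSERT ON documents
    BEGIN
        INSERT INTO events (action, doc_name) VALUES ('created', NEW.name);
    END;
""")

conn.execute("INSERT INTO documents (name) VALUES ('quarterly-report.pdf')")
conn.commit()

# A downstream service could poll this table (or a message queue) for new events
print(conn.execute("SELECT action, doc_name FROM events").fetchall())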

Cloud and open source to the rescue

Public cloud vendors have taken notice of this proliferation, and they now offer the fundamental building blocks required for microservices-based applications. AWS Lambda, Azure Functions, and Google Cloud Functions all offer robust, easy-to-use, scalable infrastructure for microservices.

These services also handle the generation of events by various components within their respective ecosystems. Amazon S3, AWS's object storage offering, lets its buckets (logical containers of objects) be configured to trigger AWS Lambda functions whenever objects are created or deleted. Microsoft Azure Blob triggers can invoke Azure Functions. Similarly, Google has Object Change Notification.
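
As a rough illustration of the S3-to-Lambda pattern, here is a minimal sketch of a Python Lambda handler that reacts to object-created notifications; the index_metadata helper is a hypothetical placeholder for whatever downstream service you would call.

# Sketch of an AWS Lambda handler invoked by an S3 object-created event.
# index_metadata() is a hypothetical placeholder, not a real AWS API.
import urllib.parse


def index_metadata(bucket, key):
    # Placeholder: a real system might call a search-indexing microservice here
    print("Indexing metadata for s3://{}/{}".format(bucket, key))


def lambda_handler(event, context):
    # S3 delivers one or more records per invocation
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        index_metadata(bucket, key)
    return {"processed": len(records)}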

In the open source world, Minio offers events. Additionally, NoSQL systems such as Cassandra (triggers) and HBase (coprocessors) give developers the same functionality for key-value applications. On-premises commercial options for event-producing infrastructure have historically been hard to find, but offerings from vendors such as Igneous Systems and MapR give developers tools for next-generation applications.

Integration with messaging systems such as Apache Kafka, AWS SQS, and Azure Queue provides the mechanism necessary for feeding those events into a rich ecosystem of decoupled microservices, allowing powerful, dynamic, data-driven pipelines to be built. As new data arrives, it can be automatically indexed, transformed, and replicated. In addition, notifications can automatically be sent to systems that display dashboards for real-time monitoring and decision making.
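
To show how such events might fan out to decoupled services, here is a minimal consumer sketch assuming the kafka-python client; the topic name, consumer group, and JSON event shape are hypothetical.

# Sketch of a decoupled microservice consuming storage events from Kafka.
# Assumes the kafka-python package; "object-events" and the event fields
# are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "object-events",
    bootstrap_servers=["localhost:9092"],
    group_id="metadata-indexer",                     # each microservice uses its own group
    value_deserializer=lambda raw: json.loads(raw),  # events arrive as JSON bytes
)

for message in consumer:
    event = message.value
    # Each service reacts only to the events it cares about
    if event.get("action") == "created":
        print("Indexing {} from bucket {}".format(event.get("key"), event.get("bucket")))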

A Google Drive example

Consider an example based on Google Drive, where newly uploaded files cause an event to be generated, which is then passed off to multiple microservices, each responsible for a different function:

  • Index metadata, enabling user friendly search.
  • Index full document text (when applicable), enhancing search.
  • OCR images containing text.

In this scenario, the event-driven object store kicks off all the resulting actions, while multiple decoupled microservices allow for rich processing and decoration of metadata, without impacting object store performance.  These same principles can be applied to facial recognition, as well as the analysis of audio to perform functions like transcription and sentiment analysis.

Why is it important for events to be generated by the underlying platform? Applications require guarantees that whenever a file, object, or record is committed, there will be an event notification whose contents are 100% accurate. Unlike alternative methods, which can be both inefficient and prone to edge cases, the underlying storage platform can more reliably inform the application that the data and its associated metadata have been successfully written, and what that metadata was.

Consider the two alternatives: 1) writing that logic into ingestion code, application writes, or a proxy, or 2) relying on fragile techniques such as log scraping (as is the case with MongoDB and most traditional file systems). The former is not readily portable, and the latter can break easily with even the most subtle changes by the platform vendor. By enabling the underlying infrastructure to handle this heavy lifting, you can focus on the key business logic of your applications.

Are you shouldering too much of the burden?

Many developers are well aware of the shift that is occurring towards event-driven computing, and microservices architecture.  However, what is often less well-understood is that the platform or infrastructure components upon which these technologies are deployed must be capable of generating events and publishing them using open, common APIs.  Developers should not settle for legacy systems which put the burden on them to build this functionality.

2016 State of DevOps Report

 

3 highly effective strategies for managing test data

Think back to the first automated test you wrote. If you're like most testing professionals, you probably used an existing user and password and then wrote verification points using data already in the system. Then you ran the test. If it passed, it was because the data in the system was the same as it was when you wrote the test. And if it didn't pass, it was probably because the data had changed.

Most new automated testers experience this. But they quickly learn that they can’t rely on specific data residing in the system when the test script executes. Test data must be set up in the system so that tests run credibly, and with accurate reporting. 

Over the last year, I've researched, written, and spoken coast to coast on strategies for managing test data and the common patterns you can use to resolve these issues. The set of solutions surrounding test data is what I call "data strategies for testing." Here are three patterns for managing your own test data more effectively. If, after reading, you want to dig in more deeply, drop in on my presentation on these patterns at the upcoming Automation Guild conference.

Continuous testing: A practical guide

Three strategies for managing test data

Each data strategy has two components: a "creational strategy" and a "cleanup strategy." The creational strategy creates the data a test needs. The cleanup strategy cleans it up afterward.
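
As a concrete (and deliberately simplified) illustration of those two components, here is a pytest sketch in which the fixture body is the creational strategy and the code after the yield is the cleanup strategy; create_user and delete_user are hypothetical helpers standing in for your application's own API or database calls.

# Sketch of a data strategy with both components. create_user() and
# delete_user() are hypothetical stand-ins for real application calls.
import uuid

import pytest


def create_user(username):
    # Hypothetical: call the application's API or insert a database row
    return {"id": str(uuid.uuid4()), "username": username}


def delete_user(user_id):
    # Hypothetical: remove the user so later runs start clean
    pass


@pytest.fixture
def test_user():
    user = create_user("user-" + uuid.uuid4().hex[:8])  # creational strategy
    yield user
    delete_user(user["id"])                             # cleanup strategy


def test_login(test_user):
    # The test relies on data it created, not on whatever happens to be in the system
    assert test_user["username"].startswith("user-")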

1. The elementary approach

I call the approach I described at the beginning of this article  “the elementary approach” because it has no creational strategy. The test automation code does nothing to create the data that the tests use. Likewise, the approach does nothing to clean up data after each test case runs.

While this approach doesn't work in most environments, or with most applications under test, it does serve as a foundation for other patterns. The elementary approach can work in some cases, but those are few and far between. Most of us realize very quickly that we must manage the data in the system in order to get the results we want.

For instance, if the data in the system changes because another user (or test case) changes it, then our test fails. If we want our test case to change data in the system and verify that it changed, re-running the test will fail. The same is true if we wanted to run the same test case in parallel—we’d experience a race condition. Test executions compete to be the first to access and change data. One would fail, one would pass. So if the organization values consistent test results, the elementary approach won’t work.

2. Refresh your data source

A common solution to this problem is to reset the data source that the application is using prior to test execution. I call this "the refresh data source approach.”

In between test executions, test automation will reset the data source. That solves the problem of making sure you have the same data in the system each time tests run, provided you refresh the data source with a snapshot containing the data you want. 
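
A minimal sketch of what that refresh step might look like, assuming (purely for illustration) that the application under test reads a SQLite file and that you keep a known-good snapshot next to it:

# Sketch of the refresh-data-source approach: restore a known-good snapshot
# once per test session. The paths are hypothetical; a real environment might
# restore a database dump or re-provision an environment instead.
import shutil

import pytest

SNAPSHOT = "snapshots/app_baseline.db"   # known-good data set
LIVE_DB = "runtime/app.db"               # data source the application uses


@pytest.fixture(scope="session", autouse=True)
def refresh_data_source():
    # Cleanup-only strategy: every session starts from the same snapshot
    shutil.copyfile(SNAPSHOT, LIVE_DB)
    yield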

But this can be costly. In some systems, refreshing a data source can take hours, or even days. It may also be costly in terms of labor. After all, how many testers know how to reset an Oracle database to a previous state? The technical skills needed to implement this approach may be high.

As with the elementary approach, the refresh-data-source approach works with some test suites, applications, and environments. The key to implementing it is understanding the team's constraints and aligning them with the goals for the tests. For instance, in the case of a shared system under test (SUT), how will refreshing the data source affect the other testers on your team? Management may not agree to having 10 testers sitting idle for a couple of hours a day because of a refresh strategy on a shared system. That doesn't sound like something that will aid today's continuous delivery initiatives.

3. The selfish data generation approach

So the next thought for many is: What if we didn't refresh the database often, and instead created unique data for each execution of a test case? I call this "selfish data generation."

Whereas the refresh-data-source strategy has a cleanup strategy but no creation strategy, this approach has a creation strategy but no cleanup strategy. Consider a test case that creates the data it needs to verify functionality, and where that data is unique. The problem of race conditions on data goes away, because each test has its own unique data to modify and use to verify functionality. Additionally, the problem of long-running refresh code is gone, and your testers don't sit idle while those long refresh processes run.
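
A minimal sketch of selfish data generation, with register_user standing in (hypothetically) for your application's signup API:

# Sketch of selfish data generation: every test execution creates its own
# unique data and never cleans it up. register_user() is a hypothetical helper.
import uuid


def register_user(username, email):
    # Hypothetical: POST to the application's signup endpoint
    return {"username": username, "email": email}


def test_new_user_can_log_in():
    unique = uuid.uuid4().hex[:10]
    user = register_user("user-" + unique, unique + "@example.test")
    # No other test touches this user, so there is no race on shared data
    assert user["username"] == "user-" + unique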

A new problem created by this approach is that data builds up in the system quickly. How big a problem could that be? I hear developers say again and again that tests "will never create enough data that it will matter." And every time, just a matter of weeks later, I end up at the table with them discussing the large amount of data that has built up in the system.

In healthy automated testing environments, automated tests run a lot. They run many times while they are being developed. When they're tied into continuous integration systems and run with every commit, the problem is amplified. When every small test case creates data in the system, the size of the data source explodes.

Selfish data generation is so named because the strategy cares only about the concerns of the tests, and nothing else. It doesn't consider other interests or needs. It doesn't consider what may happen when, over the course of a couple of months, it has created 500 million users. And it doesn't consider what that data growth does to query times across the application.

What is good about the selfish data generation approach is that it gets all of your tests to run without having race conditions causing false positives in test reports. It is also very good at finding issues within the SUT that arise from varying the data used for inputs.

Getting started

These three strategies are the most basic patterns I’ve discovered. They should pique your interest, serve as a basis for developing a better understanding of test data management, and help you think through what you do with your own test environments. Mix these, and match them. Explore alternatives, such as refreshing specific data and generating other data. Explore whether mocking data sources can accelerate testing efforts. 

As systems become more intertwined, you'll need more solutions to push ahead with testing and test automation. But today you can make a commitment to actively managing test data so that your testing can be accurate, viable, and repeatable. You can find more information in my webinars, specifically the one entitled “Patterns in Managing Your Test Data.”

In January, I'll be speaking at the Automation Guild in depth about these patterns, and demonstrating test automation code that implements them. I'll make simple reference implementations of these solutions available to attendees. I hope to see you then. In the meantime, if you have questions, please post them below.

Image credit: Flickr

 

Make gradual software deployment risk-free with a real-time CDN

For many organizations, making changes to the content they serve may not be an easy slam dunk. The new content might contain a buggy script that causes browsers to display annoying behaviors. Or it might simply be new content that you want to unveil gradually to your audiences. In both cases, you want to perform a gradual deployment, a.k.a. A/B testing, to test the new content on a small subgroup of your customers.

Organizations have been increasingly looking to content delivery networks (CDNs) for JavaScript monitoring clients. CDN nodes generally cache content separately for each customer group, which means that the content can be versioned and may vary between customers. Since it is versioned, do you have to change the JavaScript version occasionally according to your customers’ requirements? The answer, of course, is yes, and this is where it starts getting risky.     

“There can be no great accomplishment without risk.” —Neil Armstrong

Why not minimize the risk through gradual deployment of updated static content? Here's how to cut the risk with an easy, gradual deployment using a real-time CDN.

Continuous testing: A practical guide

How to implement gradual deployment

The following instructions show gradual deployment on the generalized CDN level. You will need to adapt the general process to the specific capabilities of your own CDN. After the general CDN instructions below, I give an example of a specific implementation our team has used.

Step 1 - Define A/B:

Define version A (current version) and version B (next version to test).

Step 2 - Define success criteria and appropriate logging:

Gradually deploy version B, while making sure to log possible errors, monitor performance and define success criteria. For example, success can be defined as:

No errors and maximum performance.

Step 3 - Serve version A to X% of the users and version B to Y% of them:

You should use a cookie indicating which version needs to be served. If it exists (from a previous session), use it; if not, generate an A/B value according to the required percentage and set it on the response.

To make sure the CDN’s cache nodes will cache version A separately from version B, you should use the Vary header. Normally, CDNs use the request path and the host header to find an object in their cache. The Vary header tells the cache which other parts of the request (header names, separated by commas) are also relevant for finding the cache object.

An additional point to mention: You may want to assign different percentages to each customer group and be able to change those percentages dynamically. In this case, check whether the CDN provides some kind of dynamically updatable config table from which the group percentage can be extracted.

You'll find it instructive to create your own advanced gradual deployment/A/B testing logic using a real-time CDN.

Fastly-specific implementation example

The following instructions are geared to the Fastly CDN. If you use another CDN, you will need to adapt the general process to the specific capabilities of that particular CDN.

Let's start by setting the header, X-VersionAB, indicating which version to serve:

# Subroutine executed when a request is received
sub vcl_recv {
  if (req.http.Cookie:VersionAB) {
    # Cookie exists; use it to populate the header
    set req.http.X-VersionAB = req.http.Cookie:VersionAB;
  } else {
    # Cookie doesn't exist; generate a random header value according to the percentage
    if (randombool(10, 100)) {
      set req.http.X-VersionAB = "B";
    } else {
      set req.http.X-VersionAB = "A";
    }
  }
}

VersionAB is the cookie indicating which version needs to be served. If it exists (from a previous session), it is used; if not, a new random value is generated according to the percentage passed as the first parameter to the randombool function.

On the other hand, if you had no cookie to begin with, you would want to create it with a three-day expiry.

# Subroutine executed before the response is delivered to the client
sub vcl_deliver {
  if (!req.http.Cookie:VersionAB) {
    # No cookie yet; set it with a three-day expiry
    add resp.http.Set-Cookie = "VersionAB=" req.http.X-VersionAB "; expires=" now + 3d ";";
  }
  return (deliver);
}

At this point, you’ve made sure that:

  • Every request reaching your backend will have X-VersionAB set to the correct version;
  • Every response being delivered will set the cookie when needed.

All that remains to wrap this up is to make sure the CDN’s cache nodes will cache version A separately from version B. This is exactly what the Vary header is designed for. Normally, CDNs like Fastly use the request path and the host header to find an object in their cache. The Vary header tells the cache which other parts of the request (header names, separated by a comma) are also relevant for finding the cache object.

sub vcl_fetch {
  # Append the X-VersionAB header name to the Vary header
  if (beresp.http.Vary) {
    set beresp.http.Vary = beresp.http.Vary ", X-VersionAB";
  } else {
    set beresp.http.Vary = "X-VersionAB";
  }
}

Now the hashing function, which determines the key for the cache entry, will take your version header into account. An additional point to mention: You may want to assign different percentages to each customer group and be able to change those percentages dynamically. Edge Dictionaries come to the rescue. (Full documentation can be found here and here.)

To make a long story short, using the REST API you can create and update an edge dictionary for maintaining state (a key-value table) across VCL versions.

For the following steps, we assume you have read how to use edge dictionaries and you were able to create one for your service.

Here is an example of an edge dictionary:

table gradual_percentage {
  "Group1": "10",
  "Group2": "20",
  "Group3": "30"
}

You can fetch the value for the relevant group and use it to set the right X-VersionAB value.

# Subroutine executed when a request is received
sub vcl_recv {
  set req.http.X-Group-Num = <Group extraction logic>;
  set req.http.X-Percentage = table.lookup(gradual_percentage, req.http.X-Group-Num);
  if (req.http.Cookie:VersionAB) {
    # Cookie exists; use it to populate the header
    set req.http.X-VersionAB = req.http.Cookie:VersionAB;
  } else {
    # Cookie doesn't exist; generate a random header value according to the group's percentage
    if (randombool(std.atoi(req.http.X-Percentage), 100)) {
      set req.http.X-VersionAB = "B";
    } else {
      set req.http.X-VersionAB = "A";
    }
  }
}

Replace <Group extraction logic> with your own relevant logic. You can use a header or a value extracted from a cookie, for example. From there, you obtain the percentage value from the dictionary (“gradual_percentage”) for the relevant group and use this value (after converting the string to a number) for the randombool function.

If you are not certain the dictionary already contains a value for your group, you can add a check for this case and set req.http.X-Percentage to 0 so version A will be served.

Your work is done, and now you can use edge dictionary functionality to make life easier.

These are the key features of edge dictionaries you can now use:

  • CRUD REST APIs for manipulating group percentage.
  • Batch API for updating several groups together.
  • You can also take it to the next level and allow your customers to set their own percentage via your API. For instance, PerimeterX uses Fastly API after proper validations, authentication, and authorization.

Gradually rolling out changes in your product can be risky, but it is possible when you take into consideration the following:

  • Create A and B versions of your updated content, then define success criteria.  You serve a set percentage of users with version A, and the remainder with version B.
  • Use the Vary header to make sure the CDN’s cache nodes will cache version A separately from version B.
  • Finally, use the updatable state with the REST API (edge dictionary) to make a few tasks easier, such as updating several groups together.

Share your experiences with using CDNs for software delivery in the comments section below.

Continuous testing: A practical guide
