Discovery Analytics: It’s Not Hacking, It’s R&D, by Bill Franks


It’s Not Hacking, It’s R&D

In this article, Bill Franks reframes discovery analytics as R&D: a disciplined practice of pre-planning, analysis, and pre-testing. https://www.linkedin.com/pulse/discovery-analytics-its-hacking-rd-bill-franks

I spend a lot of time these days talking with companies about the need for a formal approach to enabling what is often called “discovery analytics” or “exploratory analytics.” What I find is that many people have a fundamental misunderstanding of what discovery analytics is all about. There is one analogy that I have found to be effective in getting people to better understand the concept. In this blog, I’ll walk you through that analogy.

IT ISN’T AIMLESS HACKING!
Many people get very concerned when I begin to discuss discovery analytics as being not fully defined, constantly evolving, and remaining fluid. They tell me that what I’m saying sounds a lot like a mad scientist sitting down running random experiments in the hope of finding something useful. I do not espouse such an approach, I can assure you!
On the contrary, a discovery process should always start with a specific high priority business problem in mind. There should also be at least a general idea of how to address the problem effectively through analytics after some initial brainstorming. At that point, a discovery process is started to explore how well our ideas actually do address the problem. Typically, a discovery process involves one or more big unknowns:

  • We may be addressing a totally new business problem
  • We may be utilizing one or more new and/or largely untested data sources
  • We may be making use of analytics techniques that we haven’t used in the past

In short, while we think a proposed approach has merit, we really don’t know for sure how well it will work. The only way to find out is to dig in and see what we find. As a part of that process, we may well adjust our approach across any number of dimensions. The final solution we find may be somewhat different from the path we started down, but it will be found by remaining focused on the core problem we started with.

WE’RE REALLY TALKING ABOUT RESEARCH & DEVELOPMENT
If analytics is going to be a strategic component of your business, then you need to invest in analytics just like you do for other core products and services your company offers. Discovery analytics aren’t mindless hacking any more than traditional research and development activities are. People accept that an R&D team will have to be creative and try many paths to identify a winner. At first glance, some say that this is acceptable in a traditional product setting but not in a discovery analytics setting. I suggest that it is important to realize that the two are the same.
One of our high-tech customers takes only a handful of new products to market in any given year. However, massive investments with many trial-and-error experiments happen behind the scenes to get those products ready. At the other end of the product complexity scale, quick-serve companies only bring a few new menu items to market each year. In their test kitchens, however, a never-ending stream of new recipes and ingredients is explored to arrive at the winners that make it to your local restaurant.

Discovery analytics are much the same. A variety of attempts will be made, of which only a few will turn out worthy of a full deployment. But, those that are worthy will have a high level of strategic value if efforts are focused in the correct areas of the business up front. Some ideas may not work out at all and will have to be abandoned. I’m sure there are also many computer chips or chicken sandwiches that never made it out of the R&D process either. Not every effort will turn into a winner, but working through the losers is the only way to get to the winners. In fact, if your analytics do not generate some losers, something is wrong: either they aren’t being made known, or the analytics team is not continuously pushing the limits of innovation.

RAMPING UP YOUR ANALYTICS R&D FUNCTION

It is critical to redirect discussion on discovery analytics toward the R&D parallel. People are much more comfortable with R&D because it is viewed as a rational, scientific, disciplined approach to developing new products and ideas. Discovery analytics is just that. Of course, it takes ongoing effort to ensure that any R&D function stays on the right path. With correct oversight and leadership, however, there is no reason your organization can’t reap large benefits from discovery analytics over time.

The best thing about building up an analytics R&D function is that it will propel itself forward. Start by getting commitment for a limited number of human and technology resources to address only a handful of critical business problems. Once you prove it works, slowly ask for more resources to attack more problems. Over time, you’ll be able to grow a stable, accepted analytics research and development function. You just have to push through the initial misunderstandings and the resistance to the unknown.

I think you’ll find, like I have, that very few people will argue against the merits of research and development as a business endeavor. The first step in the process of enabling discovery analytics at your organization is to ensure that people understand that you’re just talking about a different type of R&D effort. You’ll still have a lot of work to do, but at least people will be willing to listen to you make your case and hopefully give you a chance to show what you can do.

Inverting the Test Pyramid, by Joel Masset

Great Article from Joel Masset, Global Head of Product Assurance and Chief Quality Officer in the Financial Services Industry

Inverting the Test Pyramid
by Joel Masset
https://www.linkedin.com/pulse/inverting-test-pyramid-joel-masset

Although most organizations focus on what it means for development teams to be agile, what that changes for them, and how to make it happen, testing is still very important in an agile delivery model. It is even more critical than in a traditional model.

I want here to highlight the big change that agility represents from a testing point of view. This is highly inspired by Mike Cohn’s well-known test pyramid. However, I thought it was worth sharing this once more, as it is so close to what many major software delivery organizations need to go through nowadays.

In a traditional software delivery approach (waterfall, V cycle, iterative V cycle, washing machine…), all activities are serialized. Business requirements, design, development, test, debugging, […], release. The focus is put on finding bugs.

[Figure: the traditional test pyramid]

This leads to very little test effort happening at the beginning of the cycle, and a massive focus at the end, after developers have integrated their components into a product.

Roles are very clear and distinct. The developers’ role is to code and debug; the testers’ is to find bugs and to check that they are fixed.

Automated tests developed after the integration has happened will be essentially based on the UI. Their development cost is very high, and since they are very fragile, maintenance costs are very high too. Any update to the code is likely to break the tests, which will require the automation code to be updated prior to being run. Since most of the testing happens so close to the expected delivery date, there is a lot of pressure on the team to deliver its test campaign results. In most cases, this results in massive manual testing. In the best case, automated test development will happen in parallel so that those tests can run after the release, but very often the team will just give up on automating.

As a result, automation is extremely painful, costly, and inefficient, with a very low Return On Investment. The manual testing in this model is very costly, and happening under stress.

Don’t get me wrong, I am not saying this model cannot work, I have myself been using it for years. But oh my god, is it stressful…
Agile testing takes the opposite approach, by inverting the test pyramid.

[Figure: the inverted (agile) test pyramid]

The focus is now put on preventing bugs from existing. This means that most of the test effort will happen at the beginning, at the code and API level. In this context, automation will be extremely efficient. Unit tests will be written alongside development, or even before the actual code is written or updated (this is the Test Driven Development model).
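As a minimal sketch of that test-first flow (the function, its name, and its behavior here are hypothetical, invented purely for illustration): the unit test is written first and fails, then the smallest implementation that makes it pass is added.

```python
import unittest

# Step 1 (written first): the test expresses the expected behavior
# before any production code exists.
class TestDiscountCalculator(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(apply_discount(total=50.0), 50.0)

    def test_ten_percent_discount_at_threshold(self):
        self.assertEqual(apply_discount(total=100.0), 90.0)

# Step 2 (written second): the smallest implementation that makes
# the tests above pass.
def apply_discount(total: float, threshold: float = 100.0, rate: float = 0.10) -> float:
    """Apply a flat percentage discount once the order total reaches the threshold."""
    return total * (1 - rate) if total >= threshold else total

if __name__ == "__main__":
    unittest.main(exit=False)
```

Because these tests live next to the code and run on every build, a regression surfaces minutes after it is introduced, not weeks later in an end-of-cycle test campaign.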

Acceptance tests will be created at the API layer level during development and integrated in the continuous build and integration tools and processes.

At the time the developer checks the code in, it has already been successfully tested. Post-build and integration tests will automatically be run and the results checked.
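To make the API-layer idea concrete, here is a hedged sketch (the endpoint, handler, and payload shape are all hypothetical): the acceptance tests pin the request/response contract rather than the UI, so they can run in the continuous build without a browser and survive cosmetic front-end changes.

```python
import json
import unittest

# Hypothetical API layer: a handler that a real framework would route
# a POST /orders request to.
def create_order(request_body: str) -> tuple:
    """Return (status_code, JSON body) for a POST /orders request."""
    try:
        payload = json.loads(request_body)
    except json.JSONDecodeError:
        return 400, json.dumps({"error": "invalid JSON"})
    if "item" not in payload or "qty" not in payload:
        return 400, json.dumps({"error": "missing field"})
    return 201, json.dumps({"item": payload["item"], "qty": payload["qty"], "status": "created"})

# Acceptance tests at the API boundary: they exercise the contract,
# not UI widgets, so a restyled page cannot break them.
class TestOrdersApi(unittest.TestCase):
    def test_valid_order_is_created(self):
        status, body = create_order('{"item": "widget", "qty": 2}')
        self.assertEqual(status, 201)
        self.assertEqual(json.loads(body)["status"], "created")

    def test_malformed_json_is_rejected(self):
        status, _ = create_order('not json')
        self.assertEqual(status, 400)

if __name__ == "__main__":
    unittest.main(exit=False)
```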

Since all those tests happen during the sprint, a failed test will immediately lead to a code change fixing the issue. Not only is the cost of fixing a problem much smaller at this stage, but the risk associated with re-opening code that was pushed days or weeks ago is gone. The team then rarely has to switch context from the current version of the code to an older one. This is a much more reliable process.

Some testing will still be required after the integration steps. Some will still be automated at the UI level. This represents very boring and repetitive tests like link checks, or basic user workflow steps. The essential part of testing at that stage is exploratory testing. Based on end user workflows, these are tests that only a human can do. But the effort will be limited here.

In this new approach, you end up with developers and testers both testing and coding together. A much higher synergy, and very little stress, because the delivery process is made much more reliable.

The results are impressive for the quality of the software, the predictability of the releases, and the motivation of the teams as well.

This obviously requires a big cultural change, and hence strong senior leadership support; that will certainly be the focus of a future article…

Bill Fulbright
770-880-0959
Bill@2100solutions.com

Platform Service Transformation: Entry 2 – Platform Architecture and ReDesign: Phase One


Platform Architecture and ReDesign: Phase One


2nd Entry

The purpose of this project is to unbind and unfetter a great service and product from debilitating, confusing, and circuitous code. As in many cases, the code behind the product is the result of many years of differing coding practices and developers, and the loss of sound methods in favor of “quick fixes” to support Production. The result is hairy, overgrown, complex code with nothing but dead ends in its future (in other words, non-scalable, with too many hard-coded restrictions).

The challenge is to re-design and construct a new infrastructure that IS scalable, flexible, and elegant, without changing the User Experience. The UX may be improved for the sake of a more intuitive workflow, but we are retaining the functionality on which the customer base depends.

The tool stack being used in the current project for code re-factoring and re-design:
In order to transform a platform riddled with inefficient code and workflow paths, we are consolidating DB calls and posts, using the following to create new service-based middleware (to replace the PHP assignments): cloud-based environments for DEV, DEVOPS, and R&D; Ruby; Java; RAML for new APIs; Elasticsearch; Kibana; Jenkins; Cucumber; PHP (unraveling and re-assigning); VersionOne; Apache; Oracle; customized code generation; common sense; and top-down development with sensible deliveries for each sprint. Each of the two teams (Dev and UI/Biz) owns its parts, and there are also intersects between team functions for each team member, some more than others.
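As a hedged sketch of the DB-call consolidation mentioned above (the schema, table names, and vendor data are hypothetical, not from the actual platform), the pattern is to replace a query-per-row loop with a single joined, aggregated query behind the service:

```python
import sqlite3

# Hypothetical schema standing in for the platform's data; the point is the
# pattern, not the tables: one joined query instead of a query per row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE vendors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE invoices (id INTEGER PRIMARY KEY, vendor_id INTEGER, amount REAL);
    INSERT INTO vendors VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO invoices VALUES (1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0);
""")

def vendor_totals_n_plus_one(conn):
    """The pattern being replaced: one query per vendor (N+1 round trips)."""
    totals = {}
    for vid, name in conn.execute("SELECT id, name FROM vendors"):
        (total,) = conn.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM invoices WHERE vendor_id = ?", (vid,)
        ).fetchone()
        totals[name] = total
    return totals

def vendor_totals_consolidated(conn):
    """The replacement: a single joined, aggregated query behind the service."""
    rows = conn.execute("""
        SELECT v.name, COALESCE(SUM(i.amount), 0)
        FROM vendors v LEFT JOIN invoices i ON i.vendor_id = v.id
        GROUP BY v.id
    """)
    return dict(rows)

# Both paths return the same totals; the consolidated one makes a single round trip.
assert vendor_totals_n_plus_one(conn) == vendor_totals_consolidated(conn)
print(vendor_totals_consolidated(conn))  # {'Acme': 150.0, 'Globex': 75.0}
```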

[Figure: cloud architecture map (example, not actual)]

While we have spent a good bit of time re-engineering the product, we have realized that our demos are limited to reflecting the present level of functionality in the existing product. In order to fully deliver the needed functionality at the needed scale, we will begin development from the top down, rather than with the discovery-based bottom-up approach. The bottom-up approach has been useful in revealing many of the flaws, design complexities, inefficiencies, and workflows. This realization also spares us from developing certain functional sections, such as Security (already complex), into a flawed model. Once the present demo is completed for the execs, the functionality identified through the bottom-up approach will be retained, but developed from the top down.

This shift in approach will allow complex features to be implemented from a fresh start, designed to be scalable and efficient.

Next Entry coming soon!

True Vision is Disruptive and Effective!

[Image: Henry Ford quote]

What a great quote!

Especially if you are introducing a new and innovative approach that has nothing to do with the methods of the present “box”, AND it provides a quantum leap to the same desired outcome. I’d like to explore this in light of today’s project approaches and business processes and alternative transformational methods.

For example: in a stagnant environment, the same things are done repeatedly and unsuccessfully. IMHO, this leaves the enterprise at the mercy of its own myopia. Introducing not incremental change, but a completely transformative approach outside the existing paradigm (as a separate track or model) is the least risky and most effective way to lift the operation into a less clunky, more streamlined and elegant solution.

When an organization’s myopia prevents it from effective decision-making and operations, an outside change agent is required, not to fix what is broken, but rather to provide a transformative and efficient model. A well-run Agile approach, as a pilot or even a remote team, providing the desired outcome from outside the operational model is the embodiment of Mr. Ford’s quote mentioned above.

Understanding the needed outcome and requirements, and producing them in a “clean” environment unaffected by the usual operational approach, can produce astonishing results. So before investing in fixing what is broken, first understand what has contributed to that state, and then implement innovative solutions that cannot be argued away by those clinging to the status quo.

Likely, you may find this approach provides:

1. More efficient coding, such as reducing 100,000 lines of code to fewer than 10,000 using clusters of powerful coding tools working in a congruent stack;

2. Consolidation of the repetitive database calls that complicate the code and degrade performance;

3. The use of a small, multi-disciplinary group of developers, business architects, leaders, UI/UX developers, and testers, all accountable for creating to a common outcome. In short, an Agile team of experienced resources;

4. Clearer vision from leadership, as a result of asking the obvious but often un-asked questions;

5. Common-sense observations that provide long-term solutions instead of living with complex and innovative repairs to existing conundrums.

These are but a few considerations.

I think Mr. Ford said all this in one simple sentence!

Vendor Management Service Transformation: Entry 1 – Re-Factoring, Business Architecture

Entry 1    3.22.2015

I was recently invited to join a project for a Vendor Management Service (VMS) in mid-March 2015. Phase 1 of the project is to re-factor our client’s code by replacing the hardcoded middleware with services and adding new client-facing features, along with a new UI. All of this needs new documentation, of which there is currently very little.

Our client provides a turnkey service for managing IT vendors who need to outsource their HR, Recruiting, Accounting, and Financial Services for this aspect of their business.

My role is to document the present legacy Business Processes, the new Processes, the new Services and the newly re-factored APIs, processes and added features by providing the Requirements, Use Cases, Workflows and Processes.

The leadership on this project is not only setting the pace, but shining a bright light into the future vision for this client, and for the VMS industry. It is a privilege to work with them.


Presently, I am awash in the project ramp-up and assimilation of the many layers, features and infrastructure required to successfully launch a program as complex as this.

We have two teams: one is onsite with FTE employees of the customer and a fly-in contingent of our leadership. The other is an offsite team in Atlanta that is providing an AGILE-based component for delivery of the new code, which provides the new service APIs and integration, as well as leadership, business architecture, and process articulation & documentation. The client will observe the present SDLC-based approach for now.

We have defined the primary users and their roles, and the features (both new and old) associated with those roles, along with the functionality of those features; some, for now, will remain as legacy, while others are new. There are around 400 of these. Some are epics, requiring several of the features to support their workflows.

For the new and replacement pieces (in AGILE), we have defined the primary “Day in the Life” E2E process, from the need to the “ass in the seat,” to establish a critical Happy Path. Variations and UCM will be modeled based upon this primary structure.

The software and coding will be the same, albeit updated. The specific usage of the system will vary based upon the needs and systems of client-users of this system.

The SDLC pieces for things like the DATA, and QA will be driven from the client sites.

I will be updating this log at various points along the way…. so STAY TUNED!!

Bill Fulbright