Digital Magna Carta time?

Recently I seem unable to avoid reading material on the security risks associated with the use of technology.  It is certainly a good thing that the topic has a growing profile, as that can drive awareness of the risks upward.  However, I do worry that many articles tend only to articulate the risks and remain silent on the potential benefits of technology enabling our lives.  Writing about the dangerous downsides of how easily Internet of Things (IoT) devices can be hacked will certainly get attention.  This is fine if we also gain the value of people being more aware and then engaging on an informed basis with technology and the related information security risks.

I noticed recently that the New York Stock Exchange (NYSE) had sponsored and circulated a publication called Navigating The Digital Age: The Definitive Cybersecurity Guide (for Directors & Officers) to every NYSE-listed company board member.  This was produced in partnership with Palo Alto Networks and a wide and impressive range of contributing writers and organisations.  I found it an excellent read.  What I particularly liked was the clearly conveyed recognition that people, as much as technology or process, are at the heart of both the information security threat and the defences.  The need to educate both the consumers of technology-enabled solutions and those operating and defending them was well articulated.

The criticality of all of us being aware of the risks to our data and the steps we can take to mitigate them is becoming clearer to most people.  The publicity around corporate hacks like Sony and the recent press around the cyber “front” in the current challenging situation in the Middle East are hard to avoid.  However, in recent weeks the questions I have been asked most often around information security have been related to stories on many and various IoT devices that have allegedly proved vulnerable to hacking.  People have raised many concerns with me on a wide range of devices from connected car systems to house alarms to healthcare wearables to pacemakers.   I remember reading, but annoyingly cannot now find, an article which used the term “Internet of Nosey Things” in its discussion of the type and value of data involved.


Indeed the ISACA 2015 Risk/Reward Barometer declared that its 7,000+ contributors saw IoT as the prime area of information security concern.  The survey reported that over 70% of respondents saw a medium to high likelihood of attack via such devices, in either a consumer or a corporate context, as they become more common in the workplace.  This concern is compounded by the (ISC)2 Global Information Security Workforce Study 2015, which forecasts that we will simply not have enough security-skilled people in the workforce to provide adequate defences.  It sees a shortfall of as many as 1.5 million security workers by 2020.

If that forecast proves true then we need to place information security at the centre of our technology design process.  In fact, if you look at the automation and machine-to-machine implications of IoT, then we clearly have to ensure our defences are not operator-dependent.  The imperative to automate defences is nicely highlighted by the HP Cyber Security Report 2015, a sobering read of results from interviews with 252 companies in 7 countries.  What particularly stood out in the material is that the time to recover from a cyber-attack has risen from 14 days in 2010 to 46 days in 2015; that the number of successful attacks reported has risen by 46% since 2012; and that the average cost of cybercrime per participating company was $7.7m.

So, having started by saying I was wary of scaremongering articles on information security, I have now drifted towards the negative perspective.  It is quite hard to avoid when considering this topic, I fear.  As the benefit delivered by technology is huge and alluring, so too does it carry risk, and as ever some people don’t see a problem with acting illegally to make money.  In that sense this challenge is nothing new, and we have a good track record across many societies of working out how to protect ourselves (eventually?!) from such threats.


Perhaps we do indeed need a digital-age Magna Carta, or its mirror incarnations across the globe.  The content of this updated Magna Carta was built on the input of over 30,000 people, having begun as an initiative focused on school children.  The British Library site hosting the debate has lots of other excellent material worth reviewing.  The good news is that the debate is still open as to what this digital-age Magna Carta should state.  Why don’t you go and place your vote?

Images via Shutterstock.com.

Beyond Infancy?

The trend over recent years has been for technology services to be provided on an “as a service” (aaS) basis.  The flexibility of an always-available, highly and immediately scalable service, to which you subscribe on a short-term basis with fees directly linked to usage-based metrics, is compelling.  The drive by consumers and businesses to adopt aaS offerings has been one of the signs of our world going digital.  I do recognise that concerns remain for some, primarily in relation to data portability and security.  However, we have moved beyond the initial virtualised server offerings to more sophisticated platform- and software-based services.  Access methods have expanded beyond web browsers on computers to the current world of “apps” on a multiplicity of devices consuming “platform services”.  All very empowering and exciting, but where do we go next?

I am not alone in thinking that future “aaS” offerings will be shaped by the Internet of Things (IoT).  The services we consume in the near future will be delivered based on many diverse real-time data streams.  Data will be drawn from existing stores and combined with data generated by an ever larger set of sensor-equipped devices, whether human-centric or automated machine-to-machine (M2M).  The scale of this connected device proliferation is forecast to be vast: Cisco predicts 50 billion connected devices by 2020, Gartner 30 billion.  IDC believes that by the end of 2015 we will, across the globe, have connected just under 5,000 devices per minute for each of the 365 days of the year.  The data from these devices will be used to enable and inform the digital services that will pervade our lives.
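As a rough sanity check, the IDC rate quoted above can be turned into an annual total.  The per-minute figure is the analyst's estimate, not mine, and the arithmetic below is purely illustrative:

```python
# Back-of-the-envelope check on the IDC figure quoted above:
# "just under 5,000 devices connected per minute for each of the 365 days".
devices_per_minute = 5_000            # IDC's approximate 2015 rate
minutes_per_year = 60 * 24 * 365      # 525,600 minutes in the year
devices_in_2015 = devices_per_minute * minutes_per_year

print(f"~{devices_in_2015 / 1e9:.2f} billion devices connected during 2015")
```

At that rate roughly 2.6 billion devices would be connected in a single year, which sits plausibly alongside the tens-of-billions cumulative totals Cisco and Gartner forecast for 2020.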

These digital services will be highly personalised, based on real-time analytics, and contextual.  The personalisation will be all-pervasive and extend beyond data unique to an individual.  Services will constantly learn and tailor themselves to each user or user group based on a vast stream of context-informed data received and analysed in real time.  It is the contextual aspect that fascinates me the most.  Truly contextual services will be able to anticipate our needs and deliver unique experiences specific to us or to defined user communities.  That ability of a digital service to anticipate will need careful design to avoid slipping into being annoyingly intrusive.

The privacy aspects of digital services are complex and fascinating.  They could seriously inhibit how services evolve over the next 5 to 10 years.  Many commentators are alarmed at how lightly many people trade data privacy to access a desirable service.  Some argue that current services tend to be relatively isolated and so the risk of unexpected data leakage is acceptable.


This would seem at best naive to me.  However, I do accept that refusing to accept a provider’s terms relies on there being viable, equally alluring alternatives, or on strong willpower!  The security of the service platforms and of the volume of connected devices enabling these digital services is also clearly key; hence the increasing focus on this aspect of the enabling IoT wave over the last 12 months.  An interesting report published by HP in 2014 looked at 10 connected devices then on the market to enable various smart home services, and all failed their security testing.  A similar outcome was reported by the BBC in testing it conducted on various domestic smart devices, where its hackers were successful.

If we think the current aaS offerings are flexible and add value, then the good news is that we have only begun to see the potential.  Digital services are just leaving the infant stage and becoming toddlers.  Any parent will know that this is an exciting, challenging and at times worrying period as the toddler becomes adept in their new skills.  Learning to walk is one thing; just remember that after that comes running!

This post was previously published on the Business Value Exchange.
Image via Shutterstock.

Customer Experience – Digital Imperatives? (Part 1)

I recently needed to make changes to some mobile phone contracts for family members.  Our contracts were with two different mobile phone providers, and one performed far better than the other.  The positive experience was with O2.  The website was clear and easy to use, the “instant messaging chat to advisor” service was quick and convenient, and the human looking after me was engaging, efficient and extremely helpful.  A truly positive customer experience.  The other provider, who I think should remain nameless, provided an experience that had none of those attributes.  Customer experience in the digital age is often characterised by our demanding ever more flexibility in how we engage, ever more efficient and enjoyable transactions, ever more rapid delivery, and the truism of everything being immediately available at all times.  I held my engagement with O2 late on a Sunday night, so I think I ticked a few of those characteristics!

The view of the consumer has arguably never been more important or more easily shared.  Over recent years the value of a referral or positive review has grown steadily, with access to many different review sources now at our fingertips.

I realised recently that I now automatically consult customer reviews prior to booking any accommodation, sorting the available options by review score.  Many market analysts assert that 75% of all purchase decisions are now preceded by a review, even when the review is online but the purchase is made in-store.  Of course, in this context, trust in the review source, and in its collating scores at sufficient scale for them to be meaningful, is critical to confidence in the integrity of the data.

At the heart of these enhanced customer experiences is the dynamic combination of mobile devices and cloud computing.  It is clear that the pace of change is stressing the capability, and indeed the budget, of many IT organisations.  Someone recently pointed me at some excellent Forrester material on this challenge.  They use the term “Business Technology” and argue that successful CIOs need to lead their organisations from traditional operating models towards managing business technology outcomes rather than IT assets.  Given that a good deal of this useful information is behind the Forrester paywall, this Computer Weekly article is an excellent articulation of their argument: “Forrester – Manage Business Technology Outcomes Not IT Assets“.  At the same time I recently came across an excellent article entitled 5 Metrics for Digital Success by Aaron Rudger.  I particularly liked his suggested five key metrics for the digital age: responsiveness, latency, third-party app impact, load testing metrics and, finally, competitor benchmarking.  I will not do the article justice here, but it is well worth a read.


Regardless of what you measure, the challenges and opportunities for IT teams are going to continue to evolve at pace.  A common message from analyst articles is that over the next five years the combination of the Internet of Things, pervasive cloud computing and big data will enable organisations to offer services which are able to learn and evolve, are contextually aware, and are able to react in real time to change.  So your strategy needs to ensure that the design is user-centric, that it provides for a high degree of personalisation and contextualisation, and that you are able to iterate rapidly to innovate.

Customer experience is fundamentally about the quality of the interaction between the consumer and the company offering the service.  The intent is to build a relationship of trust and value with the consumer so that they become not just a repeat buyer but, more importantly, an advocate for you.  There is a good deal of research you can find that explores what transforms a buyer into a brand advocate.  The quality of the product or service is clearly key, but is it sufficient?  Are there other factors being assessed by your customers when they decide whether to post that glowing review of your service?  I would argue that there is a range of criteria explicitly and implicitly being assessed every time someone experiences your service.  It seems to me that the value judgements being made are becoming more sophisticated and, based on some interesting research I read recently, perhaps far more holistic than we might expect.

This post was previously published on the Business Value Exchange.
Image via

Far Too Few!

Like many of us, I tend to notice articles flagging the next big skills demand wave.  Recently an article caught my eye proclaiming that now is the time to have cyber security skills.  A recent study, the Global Information Security Workforce Study 2015 released by (ISC)2, reports that there will be an estimated 1.5 million too few people with skills in this key area.  The study has been conducted annually since 2004 and has reported a workforce shortage each time; however, it seems that the supply-to-demand gap is now accelerating.

The importance of this workforce aspect in relation to cyber security demands is also highlighted in a report I recently read by Accenture entitled “Intelligent Security: Defending The Digital Business“.  In it they summarise the most common issues challenging organisations in having an effective response to cyber security, namely:

  • Linking security and business. Tie security programs to business goals and engage stakeholders in the security conversation.
  • Thinking outside the compliance (check) box. Go beyond control- or audit-centred approaches and align with two key elements: the business itself and the nature of the threats the enterprise faces.
  • Governing the extended enterprise. Establish appropriate frameworks, policies and controls to protect extended IT environments.
  • Keeping pace with persistent threats. Adopt a dynamic approach including intelligence, analytics and response to deal with a widening variety of attacks.
  • Addressing the security supply/demand imbalance. Develop and retain staff experienced in security architecture planning and design, tools and integration to increase the likelihood of successful outcomes.

Supporting the report, they also have a very good infographic that is worth a visit: “Take A Security Leap Forwards“.

The point Accenture makes, that compliance with a given industry’s cyber security regulations is only a good starting point, particularly resonates.  This is a discussion I have had many times over recent months with colleagues.  Meeting compliance requirements is only the minimum level to achieve.  It also tends to be associated with relatively static, periodic audits rather than real-time monitoring and, indeed, adaptation.  It is pretty clear that the sophistication of externally originated cyber-attacks evolves extremely rapidly.  The points attacked are those where defences are weakest, and in the hyper-connected digital world securing the perimeter, or specific “citadels” within that perimeter, is challenging.  The defences need to be real-time, automated, holistic and appropriately funded, both to meet the risk and to reflect the asset value.

It seems to me that the last year or so has seen a growing understanding of the importance of the Chief Information Security Officer (CISO) role.  Based on hearsay, it seems that CISOs are having an easier task obtaining adequate funding for their function.  Of course, the tooling needs to match the sophistication and evolutionary pace of the cyber attackers.  The CISO needs to be enabled to engage with new and disruptive technologies as they emerge, so they can define a layered defensive strategy that is perceived not as a blocker but as adding value and as an absolute necessity.  Constructive, frequent and open access to the senior leadership team of any business is critical for a CISO who is empowered to bring real value to their organisation.  Often the decision points will be difficult, as concepts such as innovation, agility and pace are confronted directly by valid concerns about information integrity and about protection appropriate to the value the information represents.


As ever in the world of technology, there is money to be made by vendors providing tooling that enables appropriate levels of security in the digital world.  A recent Financial Times article by Hannah Kuchler highlighted that the cyber security market is now estimated to be worth $15bn-$20bn over the next three years.  The article reports that venture capital funding flowing into this area exceeded $1bn for the first time in the first quarter of 2015.  Apparently venture capital funding for cyber security across the whole of 2014 was $2.3bn, itself an increase of 33% over 2013.  The money is certainly flowing into the cyber security space.  Given the recent experiences of Sony, and the publication by WikiLeaks of the information the hackers extracted, it starts to seem rather unsurprising.

All that said, I do think many organisations face their biggest cyber security risk from threats that are far from new.  The first is the often depressing factor of your own company’s people doing something that, in hindsight, they would fully accept as being dim, often despite the act of exposing corporate information having been heavily and frequently communicated as unacceptable.  However, in my career to date, the threat that has caused me the most issues has been obsolete software: software that is not listed in the IT asset database and might be lurking under a desk, or be part of the “shadow IT” world procured on a credit card and forgotten.  Such software is no longer being actively patched for security vulnerabilities by the vendor.  It is easily missed, and the first time you become aware of its existence might well be a very unfortunate moment.  It sounds trivial compared to the sophisticated cyber attacker, but it represents an easy access point for them.  There are many examples of obsolete software that has been around long enough to be very well embedded.  The next one I think might create a few issues for many of us is MS Windows Server 2003, which goes out of support in mid-July 2015.  It might be worth another check to be sure you will have no surprises in late July.

Image via

Just Connect?

Is 2015 the year in which the much-discussed Internet of Things (IoT) becomes mainstream?  I was prompted to muse on this question by watching a friend remotely check, and then reset, the temperature of his home via his smartphone from our restaurant table.  The same evening also saw me extolling the benefits of my health wearable and demonstrating how to review my statistics via an app on my smartphone.  This is certainly different from the initial smart sensors on goods and within warehouses that helped track stock levels and triggered replenishment orders.  My first encounter with IoT was in the smart meter space in the energy sector, where meters enhanced with sensors are deployed to enable providers to monitor energy usage remotely in real time and use that feedback to optimise their delivery model.

Indeed, defining the term IoT can be problematic.  I like this definition from a McKinsey article, that it is “the networking of physical objects through the use of embedded sensors, actuators, and other devices that can collect or transmit information about the objects. The data amassed from these devices can then be analysed to optimize products, services, and operations”.  In 2011, when IoT first hit my radar, I remember many articles from analysts predicting that by 2020 the market for connected devices would have reached somewhere between 50 billion and 100 billion units.  Generally, analysts today seem to be talking about a reduced, but still material, 20 billion or 30 billion units by that date.

To enable that scale to be reached we need to look beyond the “Things” and indeed even the connectivity aspect.  Ultimately the old mantra of “it is all about the data” is at the heart of the key ingredients required.  It is not just about getting the data to a store in the cloud.  It is about doing so in a way that reflects the information privacy and security dimension within a framework of enabling technology standards.  I don’t think we will realise the promise if we end up with an IoT that is more the “Internet of Proprietary Things”.

I picked up on the proprietary angle in an article by Matt Honan in the magazine Wired:  “Apple is building a world in which there is a computer in your every interaction, waking and sleeping.  A computer in your pocket.  A computer on your body.  A computer paying for all your purchases.  A computer opening your hotel room door.  A computer monitoring your movements as you walk through the mall.   A computer watching you sleep.   A computer controlling the devices in your home.  A computer that tells you where you parked.  A computer taking your pulse, telling you how many steps you took, how high you climbed and how many calories you burned – and sharing it all with your friends…. The ecosystem may be lush, but it will be, by design, limited.  Call it the Internet of Proprietary Things.”

Many see a darker side to the IoT vision.   They see a world where you are constantly tracked, monitored and the data about you monetised without your permission on a massive scale.  Indeed some go as far as seeing the IoT as enabling a far more effective and efficient surveillance by the state, yet with the added twist that we seem to be volunteering to have it.


The threat seen is that we end up being monitored by every device in our lives: from our cars, to our household white goods, to a massive range of smartphone and wearable apps, and to the better understood spend trail we leave with credit and debit cards.  This set of data points will then be correlated, analysed and, without the relevant privacy protections, sold on to businesses without you being explicitly aware and agreeing.

There are a number of articles around that counter this point by drawing a link from IoT to social media.  I think the point they miss in doing so is that social media allows those who are suitably wary to present a curated view of themselves.  As the world becomes ever more digitised and people are tracked by a growing myriad of devices, it will almost certainly leave fewer and fewer opportunities to decide not to participate.  It is one thing to curate the view of yourself that is broadcast on social media; it seems to me quite another to expect much scope for curation in the world IoT might create.  I think it is vital that the IoT promise is achieved with an appropriate model of regulation to ensure privacy remains an option.

Images sourced from Shutterstock.
This blog post was previously published on the Business Value Exchange.

Digital Zoom – Part 1

As always December is a good month to find opinions being shared on what 2015 will bring in terms of technology trends.  My good intentions are always to commit my thoughts to writing early in the month.  Typically each year I fail to act and reach the middle of January before sitting down to write.  This year I aim to break the trend!  However, my other firm resolve to get the Christmas cards into the post early has once again proved fruitless.

I think 2014 was the year in which the “drive to a digital world” really gathered pace and became all pervasive. How that digital content is being consumed is key and many analysts are arguing that more time is being spent consuming data via mobile applications than via the web.  A good articulation of this argument has been made by Benedict Evans in his post entitled “Mobile Is Eating The World”.  It seems that the drive to a digital world and mobile devices are completely intertwined.  It is clear that success in 2015 in virtually all business spheres will depend on how adeptly companies continue to adapt their business model and offerings to the digital world.


The expectation that services can be consumed at the total convenience of the customer is now deeply embedded, certainly in the societies of the G20 countries and arguably globally.  That “anytime, anyplace, anywhere” mantra (yes I am old enough to remember the famous Martini advert!) is conditioned I think by the importance of brand recognition, context and trust.  It seems to me that people are becoming slowly more aware of the risks of the digital world, particularly the ability to trust content and to rely on privacy for data and identity.  A Forrester analyst Heidi Shey recently blogged that “Today, about a third of security decision-makers in North America and Europe view privacy as a competitive differentiator.  Forrester expects to see half of enterprises share this sentiment by the end of 2015”.  The detail of the research is behind the Forrester pay-wall but the summary is worth a read.

Clearly, to enable the hyper-connected digital world, we will need to see the underlying infrastructure continue to evolve at an ever increasing pace.  I think the argument that the digital world is made real through an ever growing population of devices and sensors combining to enable contextual data consumption is right.  A very persuasive summary of this argument was given by Satya Nadella back in March 2014, early in his tenure at Microsoft, in his “Mobile first, Cloud first” strategy messaging.  The Internet of Things (IoT) concept will become ever more real and valuable in 2015.  It will require underlying cloud-based services to enable the collection, collation and presentation of data back in a value-adding form and context.  The rapid proliferation of wearable technology is just one visible sign of the device landscape that will enable the digital world and realise the IoT promise.  The sheer number of mobile phones (often quoted as over 7 billion now in use), with the “there is an app for that” assumption, is bringing the connected digital world into the consumer mainstream ever more quickly.

We are all now expecting that the different data units required to enable a transaction or consumer experience will be seamlessly collated and enacted.  The initial “wow, that is clever” reaction to data being combined to enable something that was once slow and painful to execute will increasingly be replaced by impatience and frustration if it is not so.  I tried to explain to someone the other day how hard it used to be to renew car road tax, as opposed to the seamless online checking of the various key components required for validation delivered by the DVLA website.  I felt ancient!

So in short I see 2015 as the year where the IoT concept becomes visible to the mainstream.  It will be the year where the difference between a strong digitisation strategy and an average one will translate to material competitive advantage.  It will be the year where brands that demonstrate the quality of their content and deliver a superb customer experience combined with an appropriate contextualised respect for data and identity privacy will win.

All very exciting might be your reaction, but what does that mean for those of us in the technology sector then?

Post was also published on the Business Value Exchange.
Image is via Shutterstock.

Computing an answer to life, the universe and everything

In the 1970s, high performance computing was a major trend but in recent years, it’s fallen into the shadows created by the personal computer and the world wide web. Indeed, for a while it seemed that HPC’s destiny was to provide the basis for the Deep Thought computer in Douglas Adams’ satire, The Hitchhiker’s Guide to the Galaxy (HG2G), which was designed to provide the answer to life, the universe and everything (which we now know to be 42, of course!).

In reality, HPC never went away and technology has been improving because of Fujitsu (and others) innovating and investing (indeed, IBM named one of their Chess-playing computers Deep Thought, in reference to the HG2G computer).

Last week I wrote about the Lyons Electronic Office (LEO) as an important part of Fujitsu’s heritage, and I referenced the Fujitsu K supercomputer.  We normally avoid talking about Fujitsu products on this blog, but creating the world’s fastest supercomputer is a truly impressive feat of technological engineering – and, besides, at a recent CIO forum I was asked “so why do you bother, given only a few of these installations will ever be sold?”.  That is a fair question, and one to which I think I gave a reasonable answer at the time, but it is also one that merits further exposition – hence this post.

What I didn’t say in answer to the CIO who asked me that question was that the K supercomputer has 22,032 blade servers fitted into 864 server racks, delivering 705,024 cores for parallel computation jobs.  It is based on the Fujitsu eight-core “Venus” Sparc64-VIIIfx processor running at 2GHz and delivering 128 gigaflops per chip.  However, I confess that I did say that it has achieved the world’s best LINPACK benchmark performance of 10.51 petaflops (quadrillion floating point operations per second) with a computing efficiency ratio of 93.2%; softening the geeky answer by explaining that the name “K” comes from the Japanese Kanji character “Kei”, which means 10 peta (10^16), and that in its original sense Kei expresses a large gateway, it being hoped that the system will be a new gateway to computational science.  More detail on the testing process and league table can be found in the TOP500 project’s presentation.
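For the geekily inclined, the 93.2% efficiency figure can be reconstructed from the chip-level numbers above.  This is my own back-of-the-envelope arithmetic, using only the figures quoted in this post (705,024 cores, eight cores per chip, 128 gigaflops per chip, and the 10.51 petaflop LINPACK result):

```python
# Reconstructing K's computing efficiency from the figures quoted above.
cores = 705_024
cores_per_chip = 8
gflops_per_chip = 128                       # peak per chip, at 2 GHz

chips = cores // cores_per_chip             # 88,128 Sparc64-VIIIfx chips
peak_pflops = chips * gflops_per_chip / 1e6 # 1 petaflop = 1e6 gigaflops

linpack_pflops = 10.51                      # measured LINPACK result
efficiency = linpack_pflops / peak_pflops

print(f"Peak: {peak_pflops:.2f} PFLOPS, efficiency: {efficiency:.1%}")
```

The theoretical peak comes out at about 11.28 petaflops, and 10.51 divided by 11.28 gives the 93.2% efficiency ratio reported for the benchmark.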

The importance of the K supercomputer is both what it enables through its staggering computational power and how the technological advances it represents are cascaded through our future product portfolio.  Of course, we’re not the only company that takes supercomputing seriously and IBM Watson is another example, even competing in TV gameshows!

Our digital world is growing exponentially, and if we want to enrich it through technology and bring to life the Internet of Things then, along with the storage capacity, we need compute power.  As we get closer to using sensors to drive or manage real-time events, we need to deploy faster computational power.  However, that compute power needs to be available at a sensible cost, and market forces are busy at work here in the cloud computing context.

Interpreting the world in real time has led some to ponder how soon we will have computer processing power to rival that of the human brain.  I’ve seen some articles asserting that 10 petaflops is the processing power of the human brain, although I think the general informed consensus is that it is in reality at least 100 petaflops, and perhaps a factor of ten higher than that.  IBM has apparently forecast the existence of a brain-rivalling real-time supercomputer by 2020, although how it would be powered, and the space required to hold it, may limit applicability!  Inspired by the K supercomputer, Gary Marshall asks if technology could make our brains redundant, but it is worth noting that no computer has yet passed the famous Turing Test (i.e. we can still tell the difference between a machine response and a human response).
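To put K alongside those (highly speculative) brain estimates, the ratio arithmetic is straightforward; the brain figures here are simply the ones quoted above, not established facts:

```python
# How far is K's 10.51 PFLOPS LINPACK result from the speculative
# human-brain estimates: at least 100 PFLOPS, perhaps ten times more?
k_pflops = 10.51
brain_low_pflops, brain_high_pflops = 100, 1_000

print(f"K delivers ~1/{brain_low_pflops / k_pflops:.0f} "
      f"to ~1/{brain_high_pflops / k_pflops:.0f} of a human brain's "
      "estimated processing power")
```

So even the world’s fastest supercomputer would sit somewhere between a tenth and a hundredth of a human brain on those estimates, which puts the 2020 forecasts into perspective.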

Advances in supercomputing are bringing new capabilities into the digital world as what was once restricted and unique becomes pervasive technological capability.  Once we are able to connect sensors and actuators with sufficient processing power to enable their connection to be meaningful then we can enrich the quality of human life.  This concept is at the heart of the Fujitsu vision, delivering human-centric intelligent society.

I hope I’ve shown the criticality of the K supercomputer and our drive to commercialise those technological advances through various models, including cloud computing.  It lies at the heart of our vision of how technology will continue to evolve to enrich our lives: not just enabling high performance computation for research simulations, but delivering solutions that will touch our everyday lives, as shown in the Discovery Channel’s video about the K supercomputer.  As for the addressable market, there is a commercial variant of the K, called the PRIMEHPC FX10.

I do hope you will forgive me an atypically Fujitsu-centric post.  The question the CIO asked me was a good one: it made me think about how to give context to something I’d come to assume was obvious.

At the head of the post, I mentioned Douglas Adams’ Deep Thought computer… if I think back to the reasons we built K and the type of workloads it will process (medical research, astronomy, physics, etc.), maybe it really is computing an answer to life, the universe and everything.