Digital Magna Carta time?

Recently I seem unable to avoid reading material on the security risks associated with the use of technology.  It is certainly a good thing that the topic has a growing profile, as that can raise awareness of the risks.  However, I do worry that many articles tend to articulate only the risks and remain silent on the potential benefits arising from technology enabling our lives.  Writing about the dangerous downsides of how easily Internet of Things (IoT) devices can be hacked will definitely get attention.  This is fine if we also gain the value of people being more aware and then engaging on an informed basis with technology and the related information security risks.

I noticed recently that the New York Stock Exchange (NYSE) had sponsored and circulated a publication called Navigating The Digital Age: The Definitive Cybersecurity Guide (for Directors & Officers) to every NYSE-listed company board member.  This was produced in partnership with Palo Alto Networks and a wide and impressive range of contributing writers and organisations.  I found it an excellent read.  What I particularly liked was its clear recognition that people, as much as technology (or process), are at the heart of both the information security threat and the defences.  The need to educate both the consumers of technology-enabled solutions and those operating and defending them was well articulated.

That it is critical for all of us to be aware of the risks to our data, and of the steps we can take to mitigate them, is becoming clearer to most people.  The publicity around corporate hacks like Sony and the recent press around the cyber “front” in the current challenging situation in the Middle East are hard to avoid.  However, in recent weeks the questions I have been asked most often about information security have related to stories on the many and various IoT devices that have allegedly proved vulnerable to hacking.  People have raised concerns with me about a wide range of devices, from connected car systems to house alarms to healthcare wearables to pacemakers.  I remember reading, but annoyingly cannot now find, an article which used the term “Internet of Nosey Things” in its discussion of the type and value of data involved.


Indeed the ISACA 2015 Risk/Reward Barometer declared that its 7,000+ respondents saw IoT as the prime area of information security concern.  The survey reported that over 70% of respondents saw a medium-to-high likelihood of attack via such devices, in either the consumer or the corporate context, as they become more common in the workplace.  This concern is compounded by the (ISC)2 Global Information Security Workforce Study 2015, which forecasts that we will simply not have enough security-skilled people in the workforce to provide adequate defences.  It sees the gap being as many as 1.5 million security workers by 2020.

If that forecast proves true then we need to place information security at the centre of our technology design process.  In fact, if you look at the automation and machine-to-machine implications of IoT, then we clearly have to ensure our defences are not operator-dependent.  The imperative to automate defences is nicely highlighted by the HP Cyber Security Report 2015.  This is a sobering read of results from interviews with 252 companies in 7 countries.  What particularly stood out in the material is that the time to recover from a cyber-attack has risen from 14 days in 2010 to 46 days in 2015; that the number of successful attacks reported has risen by 46% since 2012; and that the average cost of cybercrime per participating company was $7.7m.

So, having started by saying I was wary of scaremongering articles on information security, I have now drifted towards the negative perspective.  It is quite hard to avoid when considering this topic, I fear.  The benefits delivered by technology are huge and alluring, but so are the risks, and as ever some people don’t see a problem with acting illegally to make money.  In that sense this challenge is nothing new, and we have a good track record across many societies of working out how to protect ourselves (eventually?!) from such threats.


Perhaps we do indeed need a digital age Magna Carta, or its mirror incarnations across the globe.  The content of this updated Magna Carta was built on the input of over 30,000 people, having begun as an initiative focused on schoolchildren.  The British Library site hosting the debate has lots of other excellent material worth reviewing.  The good news is that the debate is still open as to what this digital age Magna Carta should state.  Why don’t you go and place your vote?

Images via Shutterstock.com.

D For Device? Or Data? Or Both?

I think most people would agree that the blurring of the boundary between our working and personal lives is accelerating.  I know from many discussions that some people are more comfortable with that trend than others.  Typically, those yet to see their 35th birthday seem to be mostly supportive, while those past that milestone tend to be at best sceptical of the value proposition.  There are countless case studies on highly successful companies demonstrating that their success is linked in some way to their employees having a personal commitment and deep affinity to the corporate objectives.  However, recently I have read a few reports which argued that part of building that alignment can be enabled by removing the distinction between corporate devices and personal devices.  They argue that in some way this step impacts the psyche of the employees, making work more personal and so building a stronger sense of ownership.  Typically the term used in this context is Bring Your Own Device (BYOD), although there is a variant with its own proponents where the employee selects a device of their choice from a defined catalogue, namely Choose Your Own Device (CYOD).  The latter is intended to mitigate perceived risks associated with the operating model of BYOD and its implicit wide range of device options.


I must confess to having some doubts about the impact ascribed to BYOD in the context of employee empowerment.  I certainly accept that it can reduce operating expenses and decrease the level of corporate investment in enabling technology.  I have been involved in defining and deploying a BYOD strategy twice thus far in my career.  I can point to the financial benefits arising from trading convenient access to selected corporate data stores from employees’ smartphones for the cost of providing a corporate variant.  However, the more I have talked to CIOs who have deployed BYOD schemes, and to some of their highly enthused employees, the more I have heard the empowerment message coming through loud and clear.  A very confident “millennial” enthusiast for BYOD pointed out to me that she saw her smartphone and tablet as in many ways an extension of her personality.  The growth of highly personalised wearable devices, which often have a key link to the smartphone of choice, is only going to make this blurred boundary more challenging.  It seems likely that the intelligent watch is going to become mainstream, particularly now with the arrival of the Apple Watch.  People are not likely to distinguish between their personal and corporate watch.  They will want the benefits of their device of choice in the workplace, in both personal and corporate terms.

However, accepting that engagement can be driven upward by a BYOD scheme, it is very clear that the most important “D” in that context is not the “device” but rather the “data”.  Information assurance, and how the corporate data set is protected, is undoubtedly the key that unlocks BYOD deployment and the promise of more engaged, committed and enabled employees.  If you cannot securely manage access to the corporate data employees need or want to access from their own devices, then the scope of the BYOD deployment is going to be constrained and will most likely disappoint the user community.  We can all identify sectors where this constraint is in place.  Indeed, analysis of BYOD adoption by industry sector clearly shows that there are sectors with specific restrictions driven by information assurance policies.

I recently read (in a Forrester report, I think) that by 2017 over 50% of private sector organizations will no longer provide devices to their employees.  The same report highlighted that the majority of IT decision makers believe they would be at a competitive disadvantage if they did not embrace BYOD.  A quick look via the internet search engine of your choice will provide a great deal of material on how to define and deploy a BYOD policy.

There are some great case studies available from the early adopters, with interesting insights, including one that stuck in my memory of a company whose network performance was crippled because BYOD uptake was so high and their policy did not limit the number of devices each employee could bring to the party.  The vast majority of what I have read focuses on the criticality of managing access to the corporate data, and so the associated risk.  So you have the classic compromise situation whereby the drive from employees for an expansive BYOD deployment needs to be balanced with a securely managed data access model.  If these two aspects can be balanced then there is undoubtedly huge value to be derived from embracing BYOD.  Indeed, many would argue that approaching corporate IT from the “IT consumerisation” user perspective can lead to valuable innovation in the corporate data security model.  A good case for this line of argument is made by Stacey Leidwinger in her blog post entitled “Embracing Employee Empowerment“.

At the heart of this debate are what might be termed two absolute truths.  Employees who are frustrated and thwarted by restrictive technology will generally find a way around those obstacles, or at the very least introduce risk by trying to do so.  At the same time, in the digital age it is clear that securing corporate data must not constrain user enablement.  I think it is well recognised today that King Canute-like IT departments that attempt to resist the oncoming tide of end user expectations are going to find themselves drowning under a wave of “Shadow IT” challenges.  Crucially, they may well find that in so doing they have driven a range of key business risks subterranean too.

Part of this post has previously been published on the Business Value Exchange.
Images courtesy of Shutterstock.

Just Connect?

Is 2015 the year in which the much discussed Internet of Things (IoT) becomes mainstream?  I was prompted to muse on this question by watching a friend remotely check and then reset the temperature of his home from our restaurant table via his smartphone.  That same evening also saw me extolling the benefits of my health wearable device and demonstrating how to review my statistics via an app on my smartphone.  This is certainly different from the initial smart sensors on goods and within warehouses that helped track stock levels and triggered replenishment orders.  My first encounter with IoT was in the smart meter space in the energy sector, where meters enhanced with sensors are deployed to enable providers to monitor energy usage remotely in real time and use that feedback to optimise their delivery model.

Indeed, defining the term IoT can be problematic.  I like this definition from a McKinsey article: it is “the networking of physical objects through the use of embedded sensors, actuators, and other devices that can collect or transmit information about the objects. The data amassed from these devices can then be analysed to optimize products, services, and operations”.  In 2011, when IoT first hit my radar, I remember many articles from analysts predicting that by 2020 the market for connected devices would reach somewhere between 50 billion and 100 billion units.  Generally, analysts today seem to be talking about a reduced but still material 20 billion to 30 billion units by that date.

To enable that scale to be reached we need to look beyond the “Things” and indeed even the connectivity aspect.  Ultimately the old mantra of “it is all about the data” is at the heart of the key ingredients required.  It is not just about getting the data to a store in the cloud.  It is about doing so in a way that reflects the information privacy and security dimension within a framework of enabling technology standards.  I don’t think we will realise the promise if we end up with an IoT that is more the “Internet of Proprietary Things”.

I picked up on the proprietary angle in an article by Matt Honan in the magazine Wired:  “Apple is building a world in which there is a computer in your every interaction, waking and sleeping.  A computer in your pocket.  A computer on your body.  A computer paying for all your purchases.  A computer opening your hotel room door.  A computer monitoring your movements as you walk through the mall.   A computer watching you sleep.   A computer controlling the devices in your home.  A computer that tells you where you parked.  A computer taking your pulse, telling you how many steps you took, how high you climbed and how many calories you burned – and sharing it all with your friends…. The ecosystem may be lush, but it will be, by design, limited.  Call it the Internet of Proprietary Things.”

Many see a darker side to the IoT vision.   They see a world where you are constantly tracked, monitored and the data about you monetised without your permission on a massive scale.  Indeed some go as far as seeing the IoT as enabling a far more effective and efficient surveillance by the state, yet with the added twist that we seem to be volunteering to have it.


The threat seen is that we end up being monitored by every device in our lives, from our cars, to our household white goods, to a massive range of smartphone and wearable apps, to the more familiar spending trail we leave with credit and debit cards.  This set of data points will then be correlated, analysed and, without the relevant protections on privacy, sold on to businesses without us being explicitly aware and agreeing.

There are a number of articles around that counter this point by drawing a link from IoT to social media.  I think the point they miss in doing so is that social media allows those who are suitably wary to present a curated view of themselves.  As the world becomes ever more digitised and people are tracked by a growing myriad of devices, there will almost certainly be fewer and fewer opportunities to decide not to participate.  It is one thing to curate the view of yourself that is broadcast on social media.  It would seem to me quite another to expect much scope for curation in the world IoT might create.  I think it is vital that the IoT promise is achieved with an appropriate model of regulation to ensure privacy remains an option.

Images sourced from Shutterstock.
This blog post was previously published on the Business Value Exchange.

Digital Zoom – Part 1

As always December is a good month to find opinions being shared on what 2015 will bring in terms of technology trends.  My good intentions are always to commit my thoughts to writing early in the month.  Typically each year I fail to act and reach the middle of January before sitting down to write.  This year I aim to break the trend!  However, my other firm resolve to get the Christmas cards into the post early has once again proved fruitless.

I think 2014 was the year in which the “drive to a digital world” really gathered pace and became all pervasive. How that digital content is being consumed is key and many analysts are arguing that more time is being spent consuming data via mobile applications than via the web.  A good articulation of this argument has been made by Benedict Evans in his post entitled “Mobile Is Eating The World”.  It seems that the drive to a digital world and mobile devices are completely intertwined.  It is clear that success in 2015 in virtually all business spheres will depend on how adeptly companies continue to adapt their business model and offerings to the digital world.


The expectation that services can be consumed at the total convenience of the customer is now deeply embedded, certainly in the societies of the G20 countries and arguably globally.  That “anytime, anyplace, anywhere” mantra (yes, I am old enough to remember the famous Martini advert!) is conditioned, I think, by the importance of brand recognition, context and trust.  It seems to me that people are slowly becoming more aware of the risks of the digital world, particularly the ability to trust content and to rely on privacy for data and identity.  Forrester analyst Heidi Shey recently blogged that “Today, about a third of security decision-makers in North America and Europe view privacy as a competitive differentiator.  Forrester expects to see half of enterprises share this sentiment by the end of 2015”.  The detail of the research is behind the Forrester paywall but the summary is worth a read.

Clearly, to enable the hyper-connected digital world we will need to see the underlying infrastructure continue to evolve at an ever increasing pace.  I think the argument that the digital world is made real through an ever growing population of devices and sensors combining to enable contextual data consumption is right.  A very persuasive summary of this argument was given by Satya Nadella back in March 2014, early in his tenure at Microsoft, in his “Mobile first, Cloud first” strategy messaging.  The Internet of Things (IoT) concept will become ever more real and valuable in 2015.  It will require underlying cloud-based services to enable the collection, collation and presentation back of data in a value-adding form and context.  The rapid proliferation of wearable technology is just one visible sign of the devices landscape that will enable the digital world and realise the IoT promise.  The sheer number of mobile phones (often quoted as being over 7 billion now in use) with the “there is an app for that” assumption is bringing the connected digital world into the consumer mainstream ever more quickly.

We all now expect that the different data units required to enable a transaction or consumer experience will be seamlessly collated and enacted.  The initial “wow, that is clever” reaction to data being combined to enable something that was once slow and painful to execute will increasingly be replaced by impatience and frustration when it is not so.  I tried to explain to someone the other day how hard it used to be to renew car road tax, as opposed to the seamless online checking of the various key components required for validation delivered by the DVLA website.  I felt ancient!

So in short I see 2015 as the year where the IoT concept becomes visible to the mainstream.  It will be the year where the difference between a strong digitisation strategy and an average one will translate to material competitive advantage.  It will be the year where brands that demonstrate the quality of their content and deliver a superb customer experience combined with an appropriate contextualised respect for data and identity privacy will win.

All very exciting might be your reaction, but what does that mean for those of us in the technology sector then?

Post was also published on the Business Value Exchange.
Image is via Shutterstock.

Computing an answer to life, the universe and everything

In the 1970s, high performance computing was a major trend but in recent years, it’s fallen into the shadows created by the personal computer and the world wide web. Indeed, for a while it seemed that HPC’s destiny was to provide the basis for the Deep Thought computer in Douglas Adams’ satire, The Hitchhiker’s Guide to the Galaxy (HG2G), which was designed to provide the answer to life, the universe and everything (which we now know to be 42, of course!).

In reality, HPC never went away and technology has been improving because of Fujitsu (and others) innovating and investing (indeed, IBM named one of their Chess-playing computers Deep Thought, in reference to the HG2G computer).

Last week I wrote about the Lyons Electronic Office (LEO) as an important part of Fujitsu’s heritage and I referenced the Fujitsu K supercomputer.  We normally avoid talking about Fujitsu products on this blog, but creating the world’s fastest supercomputer is a truly impressive feat of technological engineering – and, besides, at a recent CIO forum I was asked “so why do you bother, given only a few of these installations will ever be sold?”.  That is a fair question and one to which I think I gave a reasonable answer at the time, but it is also one that merits further exposition – hence this post.

What I didn’t say in answer to the CIO who asked me that question was that the K supercomputer has 22,032 blade servers fitted into 864 server racks, delivering 705,024 cores for parallel computation jobs.  It is based on the Fujitsu eight-core “Venus” Sparc64-VIIIfx processor running at 2GHz and delivering 128 gigaflops per chip.  However, I confess that I did say that it has achieved the world’s best LINPACK benchmark performance of 10.51 petaflops (quadrillion floating point operations per second) with a computing efficiency ratio of 93.2%; softening the geeky answer by explaining that the name “K” comes from the Japanese Kanji character “Kei”, which means 10 peta (10¹⁶), and that in its original sense Kei expresses a large gateway, and it is hoped that the system will be a new gateway to computational science.  More detail on the testing process and league table can be found in the TOP500 project’s presentation.
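For the geekily inclined, those headline figures hang together arithmetically.  A few lines of Python reproduce the core count and the 93.2% efficiency ratio from the quoted 128 gigaflops per chip (assuming four CPUs per blade server, a detail not stated above):

```python
# Back-of-the-envelope check on the K supercomputer figures quoted above.
# The 4-CPUs-per-blade figure is an assumption, not stated in the post.
blades = 22_032
cpus_per_blade = 4            # assumption
cores_per_cpu = 8             # eight-core SPARC64 VIIIfx
gflops_per_cpu = 128          # quoted peak per chip

cores = blades * cpus_per_blade * cores_per_cpu
peak_pflops = blades * cpus_per_blade * gflops_per_cpu / 1_000_000
efficiency = 10.51 / peak_pflops  # achieved LINPACK result vs theoretical peak

print(cores)                       # -> 705024
print(round(peak_pflops, 2))       # -> 11.28 (theoretical peak, petaflops)
print(round(100 * efficiency, 1))  # -> 93.2 (% computing efficiency)
```

In other words, the 93.2% figure is simply the measured 10.51 petaflops divided by the machine’s theoretical peak of roughly 11.28 petaflops.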

The importance of the K supercomputer is both what it enables through its staggering computational power and how the technological advances it represents are cascaded through our future product portfolio.  Of course, we’re not the only company that takes supercomputing seriously and IBM Watson is another example, even competing in TV gameshows!

Our digital world is growing exponentially and if we want to enrich it through technology and bring to life the Internet of things then, along with the storage capacity, we need compute power.  As we get closer to using sensors to drive or manage real-time events, we need to deploy faster computational power.  However, that compute power needs to be available at a sensible cost level and market forces are busy at work here in the cloud computing context.

Interpreting the world in real-time has led some to ponder how soon we will have computer processing power to rival that of the human brain.  I’ve seen some articles asserting that 10 petaflops is the processing power of the human brain, although I think the general informed consensus is that it is in reality at least 100 petaflops and perhaps a factor of ten higher than that.  IBM have apparently forecast the existence of a brain-rivalling real-time supercomputer by 2020, although how it would be powered and the space required to hold it may limit applicability!  Inspired by the K supercomputer, Gary Marshall asks if technology could make our brains redundant, but it’s worth noting that no computer has yet passed the famous Turing Test (i.e. we can still tell the difference between a machine response and a human response).

Advances in supercomputing are bringing new capabilities into the digital world as what was once restricted and unique becomes pervasive technological capability.  Once we are able to connect sensors and actuators with sufficient processing power to enable their connection to be meaningful then we can enrich the quality of human life.  This concept is at the heart of the Fujitsu vision, delivering human-centric intelligent society.

I hope I’ve shown the criticality of the K supercomputer and of our drive to commercialise those technological advances through various models, including cloud computing.  It lies at the heart of our vision of how technology will continue to evolve to enrich our lives, not just in enabling high performance computation for research simulations but in delivering solutions that will touch our everyday lives, as shown in the Discovery Channel’s video about the K supercomputer.  As for the addressable market, there is a commercial variant of the K, called the PRIMEHPC FX10.

I do hope you will forgive me an atypically Fujitsu-centric post.  The question the CIO asked me was a good one; it made me think about how to give context to something I’d come to assume was obvious.

At the head of the post, I mentioned Douglas Adams’ Deep Thought computer… if I think back to the reasons we built K and the type of workloads it will process (medical research, astronomy, physics, etc.), maybe it really is computing an answer to life, the universe and everything.

Some thoughts on the “Internet of Things”

The “Internet of Things” has become a commonly used phrase and I think it’s quite a good one: we have some idea what the “Things” are but no idea where it will lead (although Hollywood has tried a few times over the years).  One thing we cannot do is absolve ourselves of the management responsibility, as there will always need to be humans somewhere in the system to avoid the “Skynet” scenario from the Terminator films.

More positively, the Internet of Things has the potential to make the digital world a very pervasive aspect of our daily lives in the physical world, supporting and enhancing many of the positive aspects of society and the aspirations we have for living together.

Eventually, people will have as many sensors as a Formula One racing car (well, quite a few anyway!), sending lots of data in to the cloud.  Not quite as wired up as the people involved with the measured life movement, but they are leading the charge.  At some point our human-centric devices will become patches (electronic tattoos), powered by energy harvested from our bodies (thermal or kinetic), and that’s when things get really exciting.  We can expect to see mobile phones being used as proxy devices to pass telemetry to the cloud.  You can see why the health industry wants this technology (although by then we’re not talking about health but “well-being”) and we might need far fewer trips to visit a doctor as a result.

Intelligent Device Hierarchy Potential, by Harbor Research

Now, with the number of “things” feeding the Internet, the potential to manage and change the way we do things is an exciting prospect – and we’re not just looking at health: examples include energy management, traffic management, and alert and monitoring systems – the list goes on.

One example reaching commercial introduction is the Boeing-Fujitsu partnership with RFID tags, where the tags contain the service history of a component and, using handheld scanners, maintenance staff can determine which parts need to be serviced – how long before this can be managed in-flight, with the parts waiting on the ground, ready to intercept the aircraft on a just-in-time basis?

Another aspect of the Internet of things will be our ability to make smart decisions based on the large volumes of data we will have to hand. This “big data”, along with associated analytics tools, can be used to spot patterns with examples including traffic, energy consumption and weather. Imagine a world of connected systems where the weatherman might not only predict the “Barbeque Summer” with a little more accuracy, but we won’t get stuck in traffic as we all rush to the beach!
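As a toy illustration of that kind of pattern-spotting (the numbers below are invented for the example, not real data), a short Python sketch correlating forecast temperature against beach-bound traffic volume might look like this:

```python
# Toy sketch of the pattern-spotting described above: computing Pearson's
# correlation coefficient between temperature and traffic volume by hand.
def pearson(xs, ys):
    """Pearson's r for two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

temps = [14, 18, 21, 25, 28, 17, 30]           # degrees C, hypothetical forecasts
traffic = [210, 340, 460, 700, 890, 300, 980]  # cars/hour towards the coast

r = pearson(temps, traffic)
print(round(r, 2))  # strongly positive, close to 1
```

A real big data pipeline would of course run analysis like this over millions of sensor readings rather than seven made-up points, but the principle – letting the data reveal the relationship before the rush to the beach – is the same.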

Image credit: Harbor Research

Big data – getting on the front foot

With cloud now part of everyday language, the next big thing is Big Data.  Essentially it is the recognition that the digital world is generating increasing volumes of data (according to Cisco, humans created more data in 2009 than in all previous years combined), most of which no one is doing anything with, except storing it.  The challenge articulated by the big data concept is the effective mining and analysis of that data to create value and wealth.  By way of an example, the Big Mac Index brings together a set of data that can give an indication of the relative wealth of a country, but how and when it is applied is the key.

Titling this post “Big Data – getting on the front foot” refers to a balance with human intuition; we often make a decision based on a small set of knowledge and information, only to be second-guessed later with facts and figures that indicate whether our decision was correct (or not).  For me, the execution of big data is about putting the right information, data and knowledge into the hands of the decision maker at the point they need it, not at some point post-decision.  What does this mean for you and me?  Well, healthcare professionals, retailers, financial services providers, government, or just about anyone that we interact with in a social or business context will have immense amounts of information about us and our relative positions in terms of health, wealth, buying habits, risk for insurance purposes, etc. – let’s hope that the decisions they make, based on that data, are the correct ones!

Fujitsu’s vision of a Human Centric Intelligent Society highlights all the positive aspects of this digital society, with the “Internet of Things” playing a pervasive role.  But is the world going to be so different as a result, or will it just spin a bit faster?  If we take our health and well-being as an example, there is a logical chain of events that leads to a general improvement.  By following a simple logical sequence – mapping the human genome, understanding the variation from what is expected, and understanding how we live and the environment we live in – we can potentially be offered very precise, evidence-based advice on how to avoid certain illnesses.  Add the ability to model potential drugs in the digital world against the human genome, including demographic variances, and the potential outcome has a huge value to society.  The research and development costs of drugs drop considerably as potential failures are weeded out very early in the development cycle and, using big data, a doctor can map the best drug to a condition you have based on your genome.

It all sounds great but there are some challenges along the way:

  • McKinsey indicates that big data will bring lots of new jobs; however, it’s my hypothesis that these are really the same jobs carried out differently.
  • Some of the bastions of our society (particularly in the west) will need to change. For example, insurance companies will need to take a different view on their risk-based business model (otherwise we will all be uninsurable!).
  • We’ll need to take a different approach to security too – look at how the “Facebook generation” views sharing and what they care about.

In short – we will all have to behave differently in the world of Big Data. After all, it’s not just a big social network where everyone is your friend!

A look back on 2010, and a view forward to 2011

At this time of year there are many articles and posts that provide insightful, amusing and thought-provoking summaries of the year nearly completed.  The fact that I have read a number of excellent reviews of 2010 has helpfully discouraged me from trying to compete.  That said, I cannot resist some personal observations on how I experienced 2010.

2010 was the year in which we went from talking about the potential of cloud computing to seeing that future state take shape in the market.  Regardless of your position on solution maturity, I doubt many would now deny that cloud computing has arrived and is influencing corporate IT strategy.  However, it is not cloud computing that stands as my key inflection point in 2010; that is reserved for the moment the penny dropped for me on the closely related force of IT consumerisation.

My moment of clarity arrived during Fujitsu’s VISIT 2010 event held in Munich during November.  I’d just walked around the exhibition hall with three client CIOs and moved through the whole range of Fujitsu activities from our endpoint products to our server and storage technologies to cloud computing offerings, our extensive partner ecosystem, right through to our research activities under the strategic intent of human centric computing to enable an intelligent networked society.

I was asked by one of the CIOs which of the areas we’d just seen was having the most impact on my internal IT strategy; after some thought, and the sound of a penny dropping, I replied that it was none of those as such, but rather the change in expectations of my IT delivery.  Two of the CIOs looked at me as if I were slightly deranged whilst (luckily!) the third nodded and agreed with me.  Over a coffee we convinced ourselves that the key challenge is not technology aspects such as device proliferation, the shadow IT landscape funded by credit cards, or even social media finding a way into the enterprise.  We decided that the key disruptor is actually the expectation of choice, and an increasing demand to apply the market dynamics of the consumer marketplace to the corporate world.  This brings with it an expectation that using corporate IT should be “pleasurable”, “exciting”, “immediate” and dare I say “cool”; a customer experience as opposed to a user experience.

At the start of the year many people, including me, were using the term “Generation Y” to encapsulate a set of behaviours and expectations that we asserted were generational.  Today I still argue that the characteristics attributed to Generation Y exist, but I now believe that many of them are not restricted to a given generation.  Indeed, if I look at my weekly barometer of demand (see my earlier post about IT consumerisation), I recognise enough of the names in my mailbox demanding iPad connectivity, Android access to corporate systems, adoption of services like DropBox and access to social media sites to know that the majority are actually Baby Boomers or Generation X.

The tension created the moment you attempt to reflect consumer-arena expectations and demands in your corporate IT strategy is perplexing.  You rapidly find yourself becoming at best the voice of caution, at worst the voice listing all the reasons why not, despite the benefit that could accrue to the organisation.  Balancing risk against benefit is a key part of the CIO role, but unsurprisingly I find the role much more rewarding when able to operate as the Chief Innovation Officer.  In the face of escalating demand for which you lack funding, quite apart from the information assurance implications or those relating to operational cost management, there is a strong temptation to simply say “no, because” and forget all of your consultative, customer-centric training in how to respond to challenging demands!

I think 2011 is going to be a challenging year for CIOs, as I don’t think the economic climate has suppressed the demand for technology solutions arising from consumer-sector expectations.  Those of us fortunate enough to be in CIO roles are certainly not going to be bored. I say fortunate as with those challenges comes change, and if we don’t like change then IT is the wrong career choice!  So have a good rest over the festive period and recharge those batteries – 2011 is going to be interesting.

Image credit: © VBar –

Sustainability is about more than just green IT

This morning I took part in a panel at an event within an initiative entitled the SMART Series, which Fujitsu Ireland jointly sponsored with the Dublin Chamber of Commerce.  The focus of the initiative is the SMART economy strategy that the Irish government has published as part of its response to the current economic challenges; the event was titled “Ireland & The Green Economy”.  The opportunity had partly arisen as a result of a week I spent in Tokyo with Fujitsu Laboratories in September, where their two key research themes of “Human Centric Computing” and “Intelligent Society” had within them a number of strands highly relevant to the aim of enabling a sustainable “smart economy”.

I wanted my contribution to convey to the audience my passionate belief in the contribution the Fujitsu Group can make to building genuinely real societal benefits around sustainability and a “smart green economy”.  As part of building Fujitsu’s credentials as the leading Japanese technology company, I also talked about the impressive progress that Japan has made as a country in the sustainability arena.  In an IDC report published last year, Japan was the only country rated “tier 1” on the IDC ICT sustainability index, being “able to use ICT to reduce emissions more effectively than any other country” (the United States, United Kingdom and Germany were listed as tier 2).  I believe the IDC report is annual and due out in November, so I’m sure I’ll be returning to this topic.  An interesting statistic shared with me by one of the CEOs attending the event in Dublin was that between 1998 and 2003 Japan accounted for 40% of all green-related patents registered, and was the most active country in 12 out of 13 fields tracked in a study by CERNA and the OECD.

The keynote speaker was John Shine, Deputy CEO of the Irish Electricity Supply Board (ESB), and I thought he did an excellent job of clearly making the case for the part information technology can and must play in enabling the innovative energy management strategies essential to building and operating a “smart green economy”.  What resonated for me was how the energy management solutions his company are deploying in Ireland assume enablement by information technology for the required capability, rather than deploying technology for its own sake.  This, I think, is critical to how we need to embed the concepts of sustainability into our corporate information technology provision.  The focus must increasingly be on ensuring the business outcome, with the “green credentials” of the solution implicit, expected and delivered.  Of course we all need to keep driving the “Green IT” agenda by optimising the carbon footprint of our data centres, using hardware that consumes zero watts in sleep mode, providing communication solutions that minimise travel, etc.  It is part of our role to enable our companies to achieve the carbon management targets relevant to our sector or sphere of delivery, but these are often only a means to an end, not necessarily the end itself.

The really interesting area is how we can use ICT to build smarter cities (to borrow an IBM term) in which technology enables society to behave in a more intelligent, sustainable way. The concept of the “Internet of Things” comes alive here for me: sensors embedded in a massive range of devices, with the ability to communicate data back to processing centres and then receive instructions on how to react automatically to become more efficient and optimise emissions.  The McKinsey Quarterly had an interesting article earlier in the year where they explored this topic and discussed a range of implementations including “sensors on patients that help physicians modify treatments rapidly; sensors in vehicles that help insurers set prices and drivers avoid accidents; sensors in factories and data centres that automatically adjust operations.”
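The sense–analyse–instruct loop described above can be sketched in a few lines. The device kinds, thresholds and action names here are illustrative assumptions, not any real smart-city protocol:

```python
# Minimal sketch of the IoT feedback loop: a reading arrives at a processing
# centre, a rule decides an action, and an instruction goes back to the device.
def control_action(reading: dict) -> str:
    """Decide how a device should react to optimise energy use and emissions."""
    if reading["kind"] == "factory_motor" and reading["load_pct"] < 20:
        return "enter_idle_mode"      # cut power draw when near-idle
    if reading["kind"] == "data_centre" and reading["temp_c"] > 27:
        return "increase_cooling"     # adjust operations automatically
    return "no_change"

readings = [
    {"kind": "factory_motor", "load_pct": 12},
    {"kind": "data_centre", "temp_c": 29},
]
for r in readings:
    print(r["kind"], "->", control_action(r))
```

Real deployments replace the hard-coded rules with analytics over historical data, but the architecture – sensors reporting in, decisions flowing back out – is the same.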

At the event we had some good discussions around the “smart grid” in the energy sector and how the concept of smart metering has huge potential to enable our sustainability agenda. It was good to relate Fujitsu’s specific activities in this area to the audience, and I’m hopeful that some opportunities to deploy our solutions in Ireland may have been triggered by the debate.  That’s the joy of these events – you are not there to overtly promote your company, except that really you are there to promote your company!

What I hope I conveyed to the audience was that ICT can play a bigger role than is typically implied by labels like “Green IT”. I’m not knocking in any way the contribution that can be made in that area; however, the combination of sensors, communication solutions, rapidly evolving analytic techniques and the flexibility and elasticity promised by cloud computing can all combine to make a broad contribution to a “smart economy”, indeed a “smart world”.  It is critical that those of us in the ICT sector do not lose sight of the sheer power of the contribution we can collectively deliver.

Translating innovation potential to business benefit

Spending a week with the researchers of Fujitsu Laboratories in Tokyo certainly provides food for thought.  It was not just the technological inventiveness and potential that we discussed, but the arguably far more challenging issue of how we take that innovation and translate it into business benefit for our clients and wider society.  I am here with a party of nine colleagues from Fujitsu UK and Ireland who are the technology leads for our operating divisions and our market offering portfolio capability delivery units.  The overt intent of the trip is mutual education: we articulate the challenges and opportunities we see in our market and client accounts, and our Japanese colleagues share their strategic thinking and the quality of their collective intellect.

My primary objective was related to, but subtly different from, the declared agenda.  I initiated the event to help both sides of the equation recognise the urgency of directly linking innovative thinking and research to our clients, and to take the step of collaborating to align an idea with a need, i.e. open innovation. In my role as Chief Information and Technology Officer for Fujitsu UK and Ireland I see both sides of that innovation coin every day: as CIO I see the operational delivery challenges and the opportunities to obtain business benefit waiting to be solved; as CTO I see the potential as technology evolves to bring new capabilities into reality.  What I wanted to achieve from immersing my colleagues in the “art of what might be possible” was their recognising the importance of what they could provide: the business challenge that needs to be solved.

On the final day of the workshop we spent three hours collating all the interesting research detail into groupings that we could relate to current challenges – either live issues a business division already had, or new opportunities to create business value sparked in people’s minds.  We were helped in this mapping process by the fact that Fujitsu Laboratories’ activities are all codified as supporting the two key top-level research themes: human centric computing and the use of technology to create an intelligent society.  Of course the real challenge was in planning how we could take the potential and make it vibrant and compelling to our clients and sales leads, whilst also retaining the sense of urgency that ensures business value is derived sooner rather than later. Managing the time horizons of rigorous and thorough technology development alongside the intense demand for innovative solutions that deliver business benefit in the short term is ultimately the balance CIOs and business leaders need to strike to attain benefit at acceptable risk and cost.

As you would expect, there were many debates held over drinks late into the evening, and one topic related to innovation arose consistently: how could we unleash the creativity of our employees to provide insight on innovation opportunities?  Eventually I came to realise (or was clearly told, you take your pick!) that, having handed out objectives to each member of the team for the trip, it seemed right and proper that I accepted one of my own!  So I agreed to deploy a social media platform that would allow us to operate a “power of the crowd” event: we would set a defined set of topics on which, over a defined time period, we would invite input and ask people to vote for the suggestions they thought most compelling and worthy of pursuit.  Of course this is a variant of open innovation that many companies already operate successfully – internally, in closed communities or publicly – to great effect, but it is new for us internally (we have used this technique with the public to define and refine equipment for a number of years). We will run the event during October and I will share how it went, warts and all, at some point in November.
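The mechanics of that “power of the crowd” event – suggestions collected per topic, votes accumulated, top ideas surfaced – can be sketched as follows. The topic and idea names are invented for illustration, not from the actual event:

```python
# Toy model of the crowd event: each suggestion belongs to a topic,
# votes accumulate against it, and we rank ideas within a topic.
from collections import Counter

votes = Counter()  # (topic, idea) -> vote count

def suggest_and_vote(topic: str, idea: str, n_votes: int = 1) -> None:
    """Record an idea under a topic and add votes for it."""
    votes[(topic, idea)] += n_votes

def top_ideas(topic: str, k: int = 3) -> list:
    """Return the k most-voted ideas for a topic, best first."""
    ranked = [(idea, n) for (t, idea), n in votes.items() if t == topic]
    return sorted(ranked, key=lambda pair: -pair[1])[:k]

suggest_and_vote("green IT", "zero-watt sleep mode rollout", 5)
suggest_and_vote("green IT", "travel-free meetings", 3)
print(top_ideas("green IT"))
```

Real platforms add identity, moderation and a time window, but the core is exactly this: a defined topic list, open input, and a visible ranking that the crowd itself produces.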