Dinosaurs don’t dance, or why storage will not be the light at the end of the tunnel for networks

Last week I took part in an Oxford-style debate as part of the Australian Energy Storage conference, where two industry colleagues and I argued that energy storage would NOT be the light at the end of the tunnel for electricity network operators.

This post is a summary of the arguments that carried us to victory in the debate.

Sadly for electricity network operators, dinosaurs don’t dance (Photo: Uncommon Jasmine, https://www.flickr.com/photos/jasminel/8140833334/)

Our overarching argument was “DINOSAURS DON’T DANCE”, which we evidenced as follows:


  • FACT – the network business model matured in the first half of the 20th century (= before TV)
  • PRECEDENT – the telco and airline businesses are unrecognisable from before deregulation in the ‘70s
  • FACT – transfer of capital expenditure/benefits realisation to the customer runs completely counter to network business model/culture
  • FACT – Australian governments currently own around 75% of electricity network assets in the NEM, and a greater share for Australia as a whole
  • FACT – in 2008/9, R&D undertaken by electricity networks across Australia was less than 1% of the value added, or about the same as the log sawmilling and timber dressing industry


  • FACT – networks are unable to capture all the benefits and are unable to collaborate
  • FACT – network incentives will have to go via the retailers, who own the customer relationship
  • FACT – continued shift to self-generation will most heavily impact networks
  • FACT – both the networks and the AER base their revenue-cap calculations on projections of future demand, which draw heavily on historical data at the expense of future technologies such as storage (which may reduce demand forecasts)
  • FACT – networks are incentivised to avoid risk and, by extension, new technology
  • FACT – RWE/Germany business model had to break before they adopted the “Future Utility” model, which is less profitable than the old model
  • QUOTE – “rate-of-return regulations … create incentives for inefficiency by encouraging cost padding”, Productivity Commission inquiry into Electricity Network Regulatory Frameworks, 2013
  • QUOTE – “If the regulator cannot obtain sufficiently reliable information on a business’s costs … it may be possible for the business to game the regulatory process by presenting information that leads to a high revenue allowance”, Productivity Commission inquiry into Electricity Network Regulatory Frameworks, 2013
  • QUOTE – “using past information to set future targets reduces the incentives of a firm to lower costs since it knows that it will decrease its revenue in the future”, Productivity Commission inquiry into Electricity Network Regulatory Frameworks, 2013
  • QUOTE – “Regulators do not have complete information about businesses’ actual costs, expenditures, demand and service quality, but they need to make judgments about what the ‘efficient’ cost might be and how long it should take a business to close any efficiency gap”, Productivity Commission inquiry into Electricity Network Regulatory Frameworks, 2013
  • QUOTE – “Regulators therefore face a trade-off in trying to create incentives for utilities to behave efficiently, while ensuring that customers share in benefits from efficiency gains”, Productivity Commission inquiry into Electricity Network Regulatory Frameworks, 2013
  • QUOTE – “if a network operates in a low risk way, and as a result, they can access lower cost financing, they can keep the difference between the actual WACC and the regulatory WACC”, Productivity Commission inquiry into Electricity Network Regulatory Frameworks, 2013
  • QUOTE – “When a business is faced with a choice between network investment and a DSP project and both have the same potential for earned returns, the business is likely to go with the “easier” network investment option”, AEMC Power of Choice, Dec 2014


  • FACT – benefits are fragmented across networks, retailers and customers
  • FACT – networks have no direct relationship with the customer (what about commercial/industrial?)
  • FACT – customers have a low level of awareness of their network operator (looking for the stat, but I heard it was less than 10%)
  • FACT – electricity providers are trusted by only 54% of households in Australia compared to the global average of 68% (CSIRO, July 2013)
  • FACT – financials for customer implementation will tip at the same time as for networks (particularly in the absence of FiTs/introduction of connection fees)
  • FACT – Ergon Energy, one of the networks most affected by solar PV adoption, have submitted only two applications for exemption from ring-fencing against ownership of solar assets during the current 5 year regulatory period, and one of these was for their own offices
  • FACT – Vector’s experience proves that storage implementation results in reduced revenue/kWh; Vector have yet to face any competition, and once this happens they’ll evolve into a completely different business from a network operator
  • QUOTE – “any individual business user has relatively little capacity to negotiate from a position of power with network businesses”, Productivity Commission inquiry into Electricity Network Regulatory Frameworks, 2013
  • FACT – the Distribution Annual Planning review process requires networks to have regard for Non-Network Alternatives, including those proposed by non-network providers
  • QUOTE – “Capital investment and technology is now flowing downstream into the customer installations”, Ian McLeod, CEO – Ergon Energy, Annual Report 2012-13


  • FACT – networks’ inability to force a change in compensation for PV uptake will require them to downsize and become customer solution enablers
  • QUOTE – “When investors realize that a business model has been stung by systemic disruptive forces, they likely will retreat”, Edison Electric Institute, January 2013
  • FACT – from 2008 to 2013, the top 20 EU utilities lost half a trillion euros from their share value
  • QUOTE – “In their current state, utilities cannot finance Europe’s hoped-for clean-energy system”, The Economist, 12th October 2013
Your author accompanied by other members of the Aust Storage Conf 2014 debate (Photo: EcoGeneration)



My oDesk experience, and what it means for the future of work

This post describes my recent foray into the world of outsourcing, where I contracted a Filipino freelancer to transcribe 3.5 hours’ worth of audio recordings via the online employment marketplace oDesk.  In what follows I’ll describe the experience itself, and my thoughts on it in the context of how technology is driving change in the world of work.


I had a series of interviews I’d conducted as part of my microgrids research project that needed to be transcribed for ease of use.  I was struggling to find the time to get it done, particularly when I had so many “higher value” uses (do these include writing this blog entry?… I’ll let you be the judge).

My first impressions of oDesk were slightly off-putting.  The number of “freelancers” (= workers) and jobs listed on the site was overwhelming, as was the spectrum of tasks/skills, costs/pay-rates and experience/work history.  The information available on individual freelancers was extensive – star-ratings based upon customer reviews, verbatim reviews from individual customers, hours of work logged through the site.  It felt a little like a horse-racing form guide, where I could get lost down rabbit-holes of statistics and subjective reviews that ultimately I’d have to cast aside in order to place my bet.

Web reviews of oDesk made things worse, with endless complaints from disgruntled freelancers separated by the occasional tale of non-delivery by an unhappy customer.

To compare and contrast, I sourced a quote from an Australian-based transcription service.  Although somewhat confusing due to the number of variables involved, it looked like I was going to be up for a minimum of $2/minute.  Without knowing how long it would take for someone to transcribe the interviews, I guessed that for 3.5 hrs = 210 minutes of recordings the cost was going to be more than $420 – way above my budget.
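For those playing along at home, the back-of-envelope comparison looks like this (a quick sketch; the $2/minute rate and fixed price are the figures quoted above, and everything else is simple arithmetic):

```python
# Back-of-envelope cost comparison for the transcription job.
# The local rate and the fixed price are as quoted in the post.

RECORDING_HOURS = 3.5
LOCAL_RATE_PER_MIN = 2.00   # AUD/minute, minimum quoted by the local service
ODESK_FIXED_PRICE = 100.00  # AUD, the fixed price I listed the job at

recording_minutes = RECORDING_HOURS * 60              # 210 minutes
local_quote = recording_minutes * LOCAL_RATE_PER_MIN  # minimum local cost

print(f"Local service: ${local_quote:.0f} minimum")   # $420 minimum
print(f"oDesk listing: ${ODESK_FIXED_PRICE:.0f} fixed")
print(f"Saving:        ${local_quote - ODESK_FIXED_PRICE:.0f}")
```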

In parallel I put the question to my 600+ LinkedIn contacts as to whether anyone had any experience with outsourcing of this type.  The one response I got from a credible source referred me back to oDesk, so despite my initial reservations I decided to give it a shot.

The first decision I made was to list the job with a fixed-rate.  Not only did this reflect my budget, but it allowed me to get over the pay-rate dilemma.

In the negative web reviews by freelancers, the most common theme seemed to be (unacceptably) low pay-rates – albeit by western standards/reviewers.  I decided to put a fixed price of $100 on my job, which seemed well above the minimum pay-rates and a reflection of what I’d be prepared to pay to avoid doing the work myself.  I reasoned that although this was clearly lower than what an Australian agency/worker would get for the work, it was decent money for someone in a developing nation compared to the bottom-end pay-rates of $5/hr and less being quoted on the site.

I created my job posting which went as follows:

Two audio interviews totalling 3.5 hrs for transcription as follows:
1. NYSERDA
* 1hr 30 min duration
* audio quality good

* three speakers (interviewer + two interviewees)
* 2hr duration
* audio quality excellent
* seven speakers (interviewer + six interviewees)
– doesn’t have to be word perfect; some technical content/terminology – incorrect transcription can be tolerated
– occasional time-stamps/markers would be good
– happy for the job to fit in around your schedule, but finish by mid-May (can be negotiated)
– relatively large filesizes, so would prefer to share files via Dropbox or similar

I then decided to manage my risk by targeting the job directly at highly-rated freelancers with a decent work history.  My main concern related to the mucking around that would ensue in the event of non-performance – my wife had already pointed out that I could have probably done the job myself by now (a typically concise and astute observation).

To find a pool of freelancers I narrowed my search to those listing pay-rates of between $5 and $20 per hour – this still left me with more than 5 pages of search results, so no problems there.  I then sent out direct offers to a shortlist of six freelancers based upon my criteria above, in response to which I received an immediate decline and an expression of interest two days later.

While the job is still underway, so far so good.  The freelancer has been nothing if not professional and responsive, and took on the job following an initial review of the audio recording quality.  His main concern was my Australian accent, but he was quickly reassured by my willingness to waive any errors (as those bits aren’t what I’m interested in).

I’ll update this post once the job is complete, but at this point it seems like a win-win.


As described in an earlier post, this experience is an example of how technology is creating economic benefits by more efficiently linking buyers and sellers (= ‘gains from trade‘).  Our respective computing/internet/file-sharing access combined with the oDesk platform have allowed me to outsource work, at a lower cost than doing it myself would have entailed, to someone for whom the agreed payment makes it worth doing.

According to oDesk, they have enabled more than $1 billion worth of work through their online marketplace since it was founded in 2005, and now form part of a global part-time work market worth $422 billion.

What this also means is that professional service workers in developed nations are under increasing pressure to identify and maintain a competitive advantage in the global marketplace.

As symbolised by the negative reviews from disgruntled oDesk workers above, pay-rates for work that can be done in developing nations are lower than what those in developed nations can or will consider (I’ll avoid passing judgement on which).  The interesting thing about the oDesk environment is that workers can see this firsthand (unlike their counterparts in the increasingly-vanquished manufacturing sector).

For workers in developed nations, I’ve some thoughts about competitive advantage.

Firstly, education and training are clear sources of advantage.  Investments in education and training by governments, business and individuals, have never been more critical given the ever-decreasing barriers to trade in jobs and income.  Skills and knowledge are critical to establishing and maintaining a competitive advantage in an increasingly competitive market – the day you stop learning is the day you die.

Secondly, access to sellers is a potential source of advantage.  As illustrated by my initial hesitation above, risk – perceived or real – can be a deterrent for trade.  Buyers/investors often manage this risk through knowledge of the seller’s credentials.  And the higher the value of the transaction, the more that this knowledge influences the purchase decision – it’s not what you know, it’s who you know.

Want a job that's fulfilling, rewarding and ongoing? (Image courtesy of Hannes Beer, https://www.flickr.com/photos/haynesmann/)


How resilience is driving energy localisation

I’m presenting on this topic at the upcoming Clean Energy Week event in Sydney on 23 July 2014

Clean Energy Week speaker email footer

This article was originally published in Reneweconomy on 29 April 2014

How would you feel if you lost power for a week, even though your electricity provider knew this was likely to happen but did very little to avoid it?

For many in the U.S., this is exactly what happened in 2012.

In response, communities and businesses are pursuing local energy solutions.  Microgrids – one of the topics at next week’s Australian Energy Storage Conference – are being promoted by U.S. policymakers and adopted by end-users as a means of improving system resilience.

Although various definitions of microgrids exist, in simple terms they can be thought of as small-scale electricity networks that can operate independently of, and be controlled separately from, the surrounding grid.  While Australia has many examples of microgrids in isolated or island communities with power systems described as off-grid, standalone or remote, grid-connected applications are rare.

The increased focus on microgrids in the U.S. is being largely driven by efforts to harden the grid and reduce the impacts of events such as extreme weather.  By way of example, power outages caused by Superstorm Sandy in October 2012 cost an estimated US$14–26 billion and resulted in 50 deaths.  Microgrids can be used to strategically fortify critical infrastructure such as hospitals, police stations, public shelters and emergency response facilities, with the ability to disconnect from and reconnect to the main grid in times of widespread outages.

These investments to promote system resilience are aligning with other objectives to promote cleaner, smarter energy.  Project owners are incorporating increasingly larger amounts of renewable energy as a reflection of technology cost reductions and sustainability objectives.  These decisions reflect the ability to tailor microgrid design and operation to specific customer needs, in contrast to the ‘one-size fits all’ approach for regional-scale grids.

For corporates, resiliency translates as business continuity.  With the cost of unplanned outages necessitating uninterruptable and/or back-up power sources, the wider benefits and decreasing costs of microgrids are increasing their appeal.  The peak demand charges applicable to large electricity users provide an incentive for increasing levels of self-sufficiency, and are a direct input into the financial argument for commercial microgrids.  Data centres, which may access cost savings by switching to Direct Current (DC) power systems, have been identified as an early market application for microgrids.

So what of Australia?  Is our electricity system resilient?

The system vulnerabilities have already been exposed.  On 16 January 2007 around 690,000 Victorian electricity customers, including 70,000 businesses and public infrastructure services such as transport, telecommunications and healthcare, experienced electricity supply interruptions as a result of a fire in the state’s northeast in the vicinity of transmission lines.  Despite there being no direct loss of life and a mere 7 homes lost to the bushfires themselves, the total economic impact on the state from the supply interruptions alone was estimated at $500 million.

The community ability to respond during the 2009 Black Saturday bushfires was severely hampered by the loss of power.  In Queensland around 200,000 people lost power after Cyclone Yasi in 2011, while some residents lost power for up to four weeks after Cyclone Larry in 2006.

Actions to address these vulnerabilities have been slow and largely superficial.  In 2010 the Australian Government released a national Critical Infrastructure Resilience Strategy that is based mainly on information sharing.

A request by Victorian distribution network operators to address climate risk in the period from 2011-15 by upgrading components of the network was declined by the Australian Energy Regulator, who were unpersuaded by the companies’ submission.

The long lead times of the electricity network price determination and infrastructure investment processes, combined with the steep learning curve for dealing with the ‘new normal’ of climate risk, suggest we are some way off from increased resilience being provided by the system operators.

Instead responsibility has been largely passed onto electricity users themselves.  In the wake of Cyclone Yasi, the Queensland Government published a guideline which recommended that “the relevant bodies undertake a review to identify the power supply security of critical infrastructure”.  As lead of the energy sector group for the Critical Infrastructure Resilience Strategy, the Australian Energy Market Operator advice on preparing for power interruptions is to create a business continuity plan and install back-up power supplies where appropriate.

And while network operators are currently ‘ring-fenced’ from providing services such as microgrids in a competitive market, the review of these guidelines has been deferred despite the Australian Energy Market Commission recommending they be reformed.

As per the U.S., the path forward seems therefore to be one of customer action.

Microgrids, which largely evolve incrementally from existing investments in distributed energy, represent the end-game in terms of going off-grid.  As technology cost reductions and network cost increases drive many towards this outcome, resilience simply strengthens the argument.

Microgrids for resilience: the U.S. experience... the infographic displayed at the Energy Networks 2014 conference in Melbourne



Intergenerational equity

This post marks the resumption of my blog after a hiatus due to the arrival of son number two (below).

Thomas Handberg, born 9pm Melbourne time, 4 July 2013

In considering how the blog relates to Tom, the World Commission on Environment and Development (aka the Brundtland Commission) provided a landmark definition for sustainable development in 1987 as follows:

development that meets the needs of the present without compromising the ability of future generations to meet their own needs

So there you have it – I owe it to Tom to resume the blog, even if changing his pants must occasionally take precedence.

The long road for electric vehicles

This post was originally published in The Conversation, 25 June 2013

After a much-hyped return to the market in 2011, the shine has again worn off electric vehicles. High profile failures, such as the bankruptcy of charging infrastructure company Better Place, and poor sales of the vehicles themselves have bolstered the opinions of naysayers, who have variously referred to electric vehicles as “welfare wagons” and “green carpetbagging“.

But I would argue that we’re simply at the beginning of a journey: electric vehicle (EV) technology will one day have a meaningful role in a more sustainable transport future.

In line with a report released by the Victorian Government on World Environment Day, I can point to a body of both theory and evidence that allows me to make this claim with confidence.

In 1962 Everett Rogers released his seminal work, Diffusion of Innovations. In it he describes how the adoption of new technologies follows a trademark S-curve – a theory that has been proven to be correct for numerous innovations over the past century.

Rogers' technology adoption curve, where the brown line depicts the increase in market share over time, and the green/blue line depicts the distribution in market share amongst buyer types


Technology adoption curves for a range of modern innovations

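For the mathematically inclined, the S-curve Rogers describes is well approximated by a logistic function.  Here’s a minimal sketch (my own illustration – the saturation level, midpoint and steepness values are arbitrary, not drawn from Rogers or any of the reports cited here):

```python
import math

def logistic_adoption(t, saturation=100.0, midpoint=10.0, rate=0.6):
    """Cumulative market share (%) at time t (years after launch).

    saturation: eventual market share; midpoint: the year the curve
    crosses half of saturation (the 'take-off' inflection point);
    rate: steepness. All three values are illustrative only.
    """
    return saturation / (1.0 + math.exp(-rate * (t - midpoint)))

# Adoption starts slowly, accelerates around the midpoint, then saturates
for year in (0, 5, 10, 15, 20):
    print(f"year {year:2d}: {logistic_adoption(year):5.1f}% market share")
```

Plot those values and you get the familiar slow start, take-off and plateau of the adoption curves pictured above.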

Electric vehicle technology is on this journey. It is worth remembering that Australia’s first mobile phone and supporting cellular network were launched in 1987. At around $11,000 in today’s terms, the Walkabout™ was about the size of ten smartphones and had an hour of talk time between recharges. Seven years later the one millionth subscriber joined the network, and by 2007 subscriptions outnumbered people in Australia – 20 years after launch and not without some challenges along the way.

An important feature of this theory relates to the early adoption phase before the technology provides a financial return for adopters. During this phase, uptake is driven by the social prestige accrued by “early adopters”; after all, people are human rather than reliably rational economic beings.

Early adopters make purchase decisions the majority would view as crazy, so this phase of new market development is often portrayed with disdain for the innovation. In this phase you’ll hear a lot about high prices, low sales and proof that the innovation is a bad idea.

The longer-term view would recognise this as an unavoidable stepping stone in the adoption of new technology. Continued investment, innovation and effective marketing are required to move along the adoption curve, particularly in the lead-up to the “take-off point” for mainstream market adoption.

In the case of electric vehicles we may already be in sight of take off. California, the most advanced electric vehicle market in the world, benefits from investment by both state and federal governments who offer purchase subsidies. Combined with the effects from global investment in design and manufacturing, Californian car-buyers can now get behind the wheel of an electric vehicle for the same price as a gasoline (petrol) equivalent. Californians buy one in three plug-ins sold in the United States, despite buying only one in ten vehicles overall.

Electric vehicle sales are increasing as awareness and understanding grows about their suitability for most driving tasks. At the start of this year the US Department of Energy compiled sales numbers that showed plug-in vehicles are well ahead of those of hybrid vehicles when compared at the same point in time from their introduction to the market. Note also how the resultant chart below resembles the start of those describing the theory and history of technology adoption shown above.

Plug-in Electric Vehicle (PEV) sales compared to Hybrid Electric Vehicle sales over the 24 months following their market introduction (US DoE)


The good news doesn’t end there. After years of being pilloried as a prime example of the Obama administration’s cleantech incompetence, high-end electric vehicle company Tesla recently paid back their US Government loan nine years ahead of schedule. Investors clamouring for a piece of the action drove Tesla shares up to the point where the company’s valuation was 25% of General Motors’.

And this momentum seems unlikely to stall. Industry reports suggest that 19 new plug-in models from 15 manufacturers are scheduled to be introduced to the US market in 2013-14. The increased availability of public charging infrastructure, especially in workplaces, will convince more and more car buyers to say “goodbye to gas“.

But what of Australia, where plug-in vehicle sales appear to be stuck in neutral?

Hope exists for our infant electric vehicle market, primarily through the spill-over benefits from uptake elsewhere. As more plug-ins are sold globally, costs will come down – so long as manufacturers bring their products to our shores.

Economic modelling from the Victorian Government has shown the most prudent path to be one where other markets bear the “first-mover” costs before we make the switch to electric vehicles once they make financial sense. But the real world is not an economic model and so more needs to be done to protect and enhance our economic competitiveness.

Mandating the installation of electric vehicle charging circuits in new housing developments is a low-cost intervention that the same modelling shows makes sense right now. For around $100 in parts and about the same amount in labour, a new home can be made EV-ready for about one-tenth the cost of a retrofit. If building rating schemes clearly recognised this, it could convince developers to make these installations where otherwise they are not.

Commitments such as these may help persuade vehicle manufacturers to bring plug-in models from their global portfolio into the Australian market.

And if we’re going to be car dependent, let’s make it easier to move towards lower cost, more environmentally friendly vehicle options.

IoT Part 3: What are the barriers to the Internet of Things?

The vision of the IoT that captures futurists is one where everything is interconnected. However there are some major obstacles to this vision being realised – privacy, security and transferring decision-making to machines.

A visual metaphor for IoT barriers

The current stink over the US National Security Agency’s gathering of ‘meta-data’ from telcos highlights the potency of the data privacy issue in terms of both hazard and outrage. People should be concerned about who has their personal information and what they do with it. But the outrage over what the U.S. Government is doing should be a prompt for people to consider who else has their personal information and under what terms. Furthermore, there needs to be greater awareness of the value of this private information, as huge businesses have evolved to take advantage of valuable data that people are providing effectively for free. Yep, that’s a multi-faceted rant only partly related to the issue at hand.

This is background to the real issue for the IoT, in that much of the data that is of value must be gathered within privacy constraints. Locational data for private individuals is both highly valuable and highly sensitive – this is why many smartphone applications will prompt users to acknowledge when their location is to be shared. For sensors which do not have a smartphone-like interaction with users, this acknowledgement is difficult to obtain, effectively preventing a more widespread rollout or promoting less-scrupulous behaviour in the form of monitoring without permission. As an outcome, there are constraints on IoT businesses in terms of:

  • What data they can capture without infringing upon people’s privacy
  • Undertaking the challenge in gaining people’s (conscious) endorsement for use of their data
  • Putting their business at risk of people’s outrage once they suspect their data is being captured or used against their wishes (regardless of whether this is the case)

Security is a similar issue but from the data gatherer’s perspective – what people/entities are going to gain access to their systems and what are they going to do when they get there? As an outcome, the IoT utopia of total connectivity is (rightly) impaired by protections applied to various data interactions.

As an example of the parallel nature of the privacy/security issues, consider the issue of home security. The perfect IoT solution may be a keyless security system that simply recognises the rightful occupants as they pass in and out of the residence, likely via their smartphone. This system may use a smartphone app that would monitor the movement of the residents – in a worst-case scenario this information may be used by organised crime to either gain access to unoccupied residences or even to facilitate an insurance fraud.

Verizon already offer a remote home security solution linked to an individual’s smartphone, which brings us to the next constraint – to what extent are people comfortable with handing control over to the machines?

For systems with national security implications this has long been a challenge that is dealt with by retaining ultimate control at human level. In the doomsday scenario, we simply can’t have the future of the planet being decided by a machine code “if/then” statement. By extension, individuals, companies and governments are unlikely to cede ultimate control over to machines – predominantly due to the perceived or real risks outlined above relating to privacy and security.

The outcome is a proliferation of standards and operating systems unique to each industry sector/application, each addressing issues specific to that sector/application – a major obstacle to streamlined communication between devices, particularly at the total system level.

At the practical level I find that this translates to sector-specific solutions that may be stitched together through integration hubs, but more often stop far short of the IoT vision that some would have you believe.

There’s still plenty of opportunity in the space however, which will be the subject of my next IoT post.

IoT Part 2: So why the interest in the Internet of Things?

This is a fairly simple post as there are three main drivers for the sudden interest in the IoT – in no particular order:

  1. Sensor module costs
  2. Wireless networks
  3. Technology trends

A microsensor with coin for scale – this is a titan compared to the sensors that can be injected into your body for medical monitoring (credit: beob8er, Flickr)

Sensor modules are the electrical circuitry that incorporates the transducers that sense the information and the communications chips that send it.  Other solid-state components that perform various processing, decision-making and response functions can also be added in as required, but it’s the cost reductions of the sensors and comm’s chips that are having the most effect on the IoT market.  McKinsey recently stated that sensor/actuator module costs had dropped 80–90% in the last 5 years, while the Financial Times recently reported that sensor/comm’s modules had dropped from €50 to €15 over the last 4 years.

Wireless networks have become ubiquitous and low cost.  The coverage of fast 3 and 4G networks has supported the phenomenal uptake of smartphones, and is a key enabler for the IoT.  The volume of data generated by all these connected devices may ultimately mean that bandwidth becomes a key IoT constraint – speaking from personal experience as someone who has instrumented a fleet of 40 cars to capture around 20 data fields at 5 second intervals over months at a time, the costs associated with this can still be prohibitive even if the situation is improving.
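To give a feel for the data volumes involved, here’s a rough estimate for a fleet like the one I instrumented (the fleet size, field count and 5-second interval are from my project; the bytes-per-field and driving-hours figures are my assumptions for the sketch):

```python
# Rough data-volume estimate for a vehicle telemetry fleet.
# Fleet size, field count and sample interval are from the project
# described above; bytes per field and driving hours are assumptions.

CARS = 40
FIELDS = 20
SAMPLE_INTERVAL_S = 5
BYTES_PER_FIELD = 8        # assumed: value plus encoding overhead
DRIVING_HOURS_PER_DAY = 8  # assumed average per car

records_per_car_per_day = DRIVING_HOURS_PER_DAY * 3600 // SAMPLE_INTERVAL_S
bytes_per_day = CARS * records_per_car_per_day * FIELDS * BYTES_PER_FIELD
mb_per_month = bytes_per_day * 30 / 1e6

print(f"Records per car per day: {records_per_car_per_day}")  # 5760
print(f"Fleet data per month: {mb_per_month:.0f} MB")         # ~1106 MB
```

Roughly a gigabyte a month sounds modest by fixed-broadband standards, but priced at mobile-data rates across 40 SIMs it adds up quickly – hence the bandwidth-cost concern above.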

Technology trends towards cloud computing and big data analytics (amongst others) are also key IoT enablers.  The ability to host and process data away from connected devices significantly reduces costs and lifts system performance to useful or highly profitable levels.  The ability to handle all this data and make sense of it is also becoming more widespread.  The importance of big data analytics is clear when you consider that a Boeing 787 aircraft generates around half a terabyte of data on every long-haul flight (Financial Times, 2013).

Next post – the key challenges.

Mobile miracle

This post is a tribute to Information and Communication Technology (ICT), and mobile phones in particular, as the key enabler for a better world.

For those 10-year-olds out there pitching for a mobile phone, there’s plenty of evidence to support this theory.

International roaming

The World Bank 2012 report Maximizing Mobile notes how:

innovation in the manufacture of mobile handsets … married with higher performance and more affordable broadband networks and services produces transformation throughout economies and societies

Similarly, The Climate Group in their equally excellent SMART 2020 report highlight how “ICT solutions can unlock emissions reductions on a dramatic scale“.

In public policy terms, ICT addresses market failures relating to imperfect information.

ICT promotes gains from trade, by linking trading agents and reducing the barriers to trade.

Or in summary for my 10-year-old audience – everything works better with a mobile phone.

City beats country, but we gotta do better

This week two reports were released which tell us quite a bit about the opportunities and issues within urbanisation.

First there was the Global Monitoring Report 2013 from the World Bank, which reports on progress made towards the Millennium Development Goals.  The findings are clear – urbanisation is good news, with clear advantages for the quality of life for city-dwellers as compared to those in rural areas.

Infographic of the advantages and challenges of urbanisation, ref Global Monitoring Report 2013 (World Bank)

However, the report clearly states that urbanisation needs to be managed to ensure good outcomes:

  • Planning – charting a course for cities by setting the terms of urbanisation, especially policies for using urban land and expanding basic infrastructure and public services
  • Connecting – making a city’s markets (labour, goods, services) accessible to other neighbourhoods in the city, to other cities, and to outside export markets
  • Financing – finding sources for large capital outlays needed to provide infrastructure and services as cities grow and urbanisation picks up speed

This provides a neat segue to the second report for the week – Productive Cities: Opportunities in a Changing Economy by the Grattan Institute.  Although this report examines the specific challenges facing Melbourne as the divide between inner and outer areas increases, the issues it documents are entirely consistent with those set out in the World Bank report above, showing that the challenges of “managed urbanisation” are not confined to developing nations.

The issues described in the Grattan Institute report reflect the land-use planning tension between the economics of suburban sprawl and urban densification.  The authors identify some of the obvious solutions without addressing the root causes – the politics of market-driven sprawl, local land-use planning and infrastructure financing that defeat good land-use/transport planning policy.

Unfortunately there are no easy solutions, as evidenced by yesterday’s announcement that the much-needed east-west transport link across the north of Melbourne will be a road tunnel rather than rail, or both.  As a transport planning decision, this reflects the realities of financing major transport infrastructure – private-sector investors are far more attracted to roads than to public transport, and political realities prevent governments from borrowing to the point of affecting their credit ratings, or from instituting road-user charging to cross-subsidise public transport.  Having done my time in the public sector making no ground on this challenge, I’ve taken what could be described as the coward’s approach: admitting defeat and embracing technology as an alternative strategy (even if it doesn’t address the root cause).

Buried away in the middle of the World Bank report is a section that looks at the influence of mobile technology:

…the extraordinary rise in mobile phone penetration has led to the emergence of a variety of innovations that allow citizens, governments, and international organizations to be more engaged and better informed, and that enable aid providers to identify and communicate more directly with beneficiaries

Numerous studies have found a positive relationship between ICT adoption and economic development in general

An evaluation of initial experiences suggests that the benefits accrue to those countries that put in place policies and programs that not only enable technological transformation but also support institutional reforms and process redesign through which services are delivered

A job for a future post is to delve more deeply into these observations and examine the crossover between the issues and opportunities of developed versus developing nations. The outcome will hopefully inform some thinking around ‘Creating Shared Value’.

How are cities growing?

This post looks more closely at the story behind the global urbanisation trend.

Global urban population by size class of settlement – note that over 50% of people reside in cities of less than 500k

A key observation relates to the facts behind the figures (ref UNFPA reports 2007 and 2011) – in my previous post I noted the headline figure that the number of people in cities now outweighs those in rural areas, but this is not the whole story:

  • the world’s urban population will grow to 4.9 billion by 2030, whereas the rural population will decrease by some 28 million between 2005 and 2030
  • most of the urban growth will take place in developing nations – the urban population of Africa and Asia is expected to double between 2000 and 2030, whereas for the developed world it will grow from 870 million to 1.01 billion
  • the ‘second wave’ of urbanisation taking place in developing nations is occurring at a far greater size and scale than the first wave that unfolded in developed nations in the 20th century – meaning that developing nations will need to build new urban infrastructure (houses, power, water, sanitation, transport, commercial and productive facilities) more quickly than did developed nations
  • the urban growth of the 21st century will be composed largely of poor people, through both natural increase (urban births) and migration (from rural to urban areas), the balance of which varies from region to region (India more the former, China the latter)
  • over half of the world’s urban population resides in settlements with a population of less than 500,000 (refer to the chart above)
  • many of the world’s poor are migrating from the centres of the biggest cities to smaller settlements on the periphery or in satellite locations as a result of the high cost of living and scarcity of jobs
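The figures in the bullets above imply quite different underlying growth rates for developing and developed regions. A quick compound-growth calculation, using only the numbers quoted in the post over the 2000–2030 window, makes the contrast explicit:

```python
# Implied compound annual growth rates behind the figures quoted above:
# developing-world urban population doubles 2000-2030; developed-world
# urban population grows from 870 million to 1.01 billion over the same period.

def annual_growth_rate(start, end, years):
    """Compound annual growth rate between two population figures."""
    return (end / start) ** (1 / years) - 1

developing = annual_growth_rate(1, 2, 30)        # doubling over 30 years
developed = annual_growth_rate(870, 1010, 30)    # 870m -> 1.01bn over 30 years

print(f"{developing:.1%}")  # ~2.3% per year
print(f"{developed:.1%}")   # ~0.5% per year
```

So the developing world’s urban population is compounding at roughly four to five times the rate of the developed world’s – the arithmetic behind the ‘second wave’ point above.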

This last point has much resonance for my home town of Melbourne, where a demographic divide is forming between the affluent inner areas and the poorer but rapidly-growing urban sprawl.  The 2009 MacroMelbourne report argues that the most significant trends and challenges affecting disadvantage and inequality in metropolitan Melbourne are:

  • Rapid population growth, particularly in outer urban areas
  • The employment and economic impact of the global financial crisis
  • Rapid increases in the number of people with multiple and complex needs
  • Ongoing challenges facing migrant, refugee and Indigenous communities
  • Lack of access to affordable housing

These themes are clearly echoed in the UNFPA reports, and I suspect that they hold true for cities everywhere.

In reflecting upon what all this means I draw upon Porter and Kramer’s concept of Creating Shared Value to make the following observations:

  • The lessons from the older, more populous cities should inform the answers for the smaller, faster-growing settlements – however there needs to be more attention given to the specific challenges/solutions applicable to small/medium cities
  • Technology solutions, in particular relating to Information and Communications Technologies (ICT), are a key enabler to addressing many of the challenges faced
  • Low-cost, highly transferable/scalable technology solutions represent the greatest opportunity

In future posts I will explore each of these issues/opportunities in more detail.