Microsoft | NHS Resource Center | Sidewinder’s Security Predictions for 2011 by Davey Winder

It’s that time of the year again, the bit right near the end where we start wondering if next year could possibly be any worse than this. (Bah humbug! Ed.) For those of us involved in any way with IT security the hope is, of course, that the answer will be no. One way of helping to ensure this is to have a good idea of what the emerging security threats will be.

While nobody could have foreseen the ongoing hacktivist attacks that have followed the political storm of the Wikileaks affair, for example, it’s a little bit easier to spot the kind of generic security issues that are most likely to shape the threat landscape in 2011. Here are my predictions, in no particular order as all security threats should be treated equally seriously, for the coming year:

1. Stuxnet will change things

Governments will start taking IT security more seriously, thanks to the perceived critical infrastructure risk following the Stuxnet attack on Iranian nuclear plants. Although this failed to do any serious damage, it did highlight the potential of sophisticated threats that target specific programmable logic controllers rather than carpet-bombing networks and servers.

With increased spending and research at government and military IT laboratories, there will be a trickle-down effect in terms of intelligence. This should lead to better security products and services in the long term, although the same process could also lead to increased regulation and compliance regimes which might have a negative impact on the day-to-day administration of IT security for enterprises like the NHS (auditing and reporting, for example).

2. The mobile threat will escalate

Smartphones are becoming smarter, tablets are becoming more commonplace and the bad guys are becoming more aware of the opportunities both to target these mobile devices themselves and also to use them as a lever to force a way into the networks they communicate with. As more and more health-related apps appear, so ever more medical staff will be inclined to use them and ever more bad guys will look to exploit them by installing malware or spyware alongside.

It’s more essential than ever that only authorised software is allowed to be used on mobile work devices, and only authorised mobile devices are allowed access to the network. It’s also just as important that staff are educated so as to be aware that the same care needs to be taken with personal and patient data when using a mobile device as when using a desktop terminal. Size really doesn’t matter as far as IT security is concerned.
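As a minimal sketch of the allow-list principle just described, both the device and the software must appear on an approved register before access is granted. All names and identifiers below are invented for illustration, not any real NHS system:

```python
# Illustrative allow-list check: a device gets network access only if
# both the device itself and the app requesting access are authorised.
# All identifiers below are hypothetical examples.

AUTHORISED_APPS = {"nhs-mail", "clinical-notes"}
AUTHORISED_DEVICES = {"TAB-0412", "PHN-0907"}

def may_connect(device_id: str, app_name: str) -> bool:
    """Grant access only when device and app are both on the registers."""
    return device_id in AUTHORISED_DEVICES and app_name in AUTHORISED_APPS

print(may_connect("TAB-0412", "clinical-notes"))  # an authorised pairing
print(may_connect("TAB-9999", "clinical-notes"))  # unknown device: refused
```

Real mobile device management suites implement this with certificates and enrolment rather than a lookup table, but the policy logic is the same: deny by default, permit by explicit registration.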

Finally, there are the physical security issues of smartphone and tablet use. Just as we have had a plethora of media reports regarding lost and stolen USB drives packed with confidential data, so we are likely to see the same concerning lost and stolen tablets and smartphones unless the physical side of mobile security is properly addressed.

3. Computer crime will get more organised

Investment in call-centre-style social engineering outfits will continue as long as such operations are seen to be profitable. This type of scam was exemplified by the 2010 ‘Microsoft Support’ con, in which victims were called at home and told that their computers were infected; of course, if they took the bait, they soon were.

I expect to see criminal organisations become much more organised at computer crime during 2011, and that will include much better targeting when it comes to attack vectors. From spear-phishing to sophisticated malware attacks, the NHS is a gold mine of hugely attractive and valuable personal data. You can expect socially engineered attacks to become more focused on bypassing the technological measures installed to protect that data. As in 2010, humans will remain the weakest link in the security chain during 2011.

4. Scams will become more social

As well as social engineering, there’s also the continuing popularity of social media within the NHS to worry about. More to the point, there’s the lack of an adequate educational and strategic response to this growth to worry about.

As more staff use social media for both authorised and unauthorised purposes, in the workplace and at home, the danger is that data leakage will be harder to prevent unless staff are properly educated on the risks involved and the potential consequences of failing to address them. 2011 could prove to be a tipping point for such risks as staff know plenty about using social networks but their understanding of social safety is still evolving.

5. Wireless will get stronger and weaker at the same time

The adoption of WiFi will continue to get stronger, with a greater reliance on wireless devices around NHS establishments. At the same time, WiFi security is likely to get weaker. Why so? Well, mainly because those who would like your data on a plate are not sitting still. It was almost exactly two years ago, on these very pages, that I warned about commercially available Russian software which enables anyone to hook up powerful Nvidia graphics cards and use the combined GPU power of these things to accelerate the cracking of WPA encryption on a budget.

Stuff a couple of these cards (each capable of processing hundreds of billions of fixed-point calculations per second) into a PC with 1GB of onboard memory per card, then link a few of those PCs together, and you can become a supercomputing bad-ass for no more than a couple of thousand pounds. That’s peanuts to the real bad guys. Just to add to the wireless woes, earlier this year Japanese researchers managed to break WPA encryption in less than sixty seconds from start to finish. WPA2 remains a secure base level for WiFi encryption, but for how long is anyone’s guess…
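Some back-of-envelope arithmetic shows why passphrase length matters more than raw GPU horsepower. The guess rate below is an assumed, illustrative figure, since WPA’s PBKDF2 key-stretching slows guessing far below raw hash rates:

```python
# Days needed to exhaust a WPA passphrase space at an assumed guess rate.
# The rate is illustrative, not a benchmark of any particular hardware.

def exhaust_time_days(charset_size: int, length: int, guesses_per_sec: float) -> float:
    """Worst-case days to try every passphrase of the given length."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_sec / 86_400  # seconds per day

RATE = 100_000  # hypothetical PSK guesses/sec for a small GPU rig

# An 8-character all-lowercase passphrase falls within weeks...
print(f"{exhaust_time_days(26, 8, RATE):,.0f} days")
# ...while 12 mixed-case-plus-digit characters remains out of reach.
print(f"{exhaust_time_days(62, 12, RATE):,.2e} days")
```

The lesson for administrators is that a long, high-entropy WPA2 passphrase buys orders of magnitude of safety margin regardless of how cheap cracking hardware becomes.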

6. The cloud will become clearer

I forecast that security issues surrounding, and in many cases holding back, the adoption of cloud computing will start to melt away as the benefits of doing business in the cloud will start to be matched by a better understanding of how to secure data within it. Private clouds will come to the fore, especially in security know-how, and 2011 could be the year that such private clouds start to gain acceptance within the health sector.

It has to be said, public clouds are quite another matter and I suspect we are still some way off from seeing NHS data floating around within a public cloud environment.

7. Blended threats will still work

It’s certainly not new for 2011, but the kind of blended threat that we saw being successfully implemented this year shows no sign of being diluted next year. A blended threat, as the name suggests, comprises many different attack vectors all blended into a single thrust. So you might get an attack which starts by email and then moves to the web via some link-clicking, or starts with a telephone call and then moves online. Expect more blending of social media and mobile platforms in 2011, to exploit the popularity and convergence of both.

8. Watch out for the Man in the Browser

MitB attacks, also known as Man in the Browser or Proxy Trojan attacks, will gather pace in 2011. We have already seen these put to use by the likes of the hugely widespread Zeus Trojan, for example. A MitB attack most commonly sees additional fields injected into HTML forms, with requests and replies being intercepted. To the infected end-user, all appears to be normal; meanwhile the bad guys are scraping away all the personal information being entered.
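Conceptually, the injection step looks like the sketch below: the trojan rewrites the page in transit, adding a field the genuine site never asked for. The form and field names here are invented for illustration:

```python
# Conceptual sketch of MitB form injection: the page the user sees gains
# an extra attacker-controlled field, while everything else looks normal.

ORIGINAL_FORM = """<form action="/login" method="post">
  <input name="username">
  <input name="password" type="password">
</form>"""

def inject_field(html: str, extra_input: str) -> str:
    """Insert an attacker-controlled input just before the form closes."""
    return html.replace("</form>", f"  {extra_input}\n</form>")

# The injected field asks for data the real site would never request.
tampered = inject_field(ORIGINAL_FORM,
                        '<input name="atm_pin" placeholder="Confirm your PIN">')
print(tampered)
```

Because the tampering happens inside the browser, SSL padlocks and correct URLs all still check out, which is why defences focus on keeping the endpoint clean and on server-side anomaly detection rather than on anything the page displays.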

That’s it for this year – which only leaves me to wish you a very happy Christmas and a safe and prosperous new year…

Global Jihad Shifting Back to Chechnya? by Alex Olesker

Global Jihad Shifting Back to Chechnya?

While in the Netherlands a few weeks ago, I was surprised to find the following story on local news: 11 Belgians, Dutch, Germans, and Moroccans were detained in a counter-terrorism sweep through Belgium, the Netherlands, and Germany. Among the men arrested were financiers, recruiters, and radicals for the Caucasus Emirate, an extremist group seeking to create an Islamic Emirate in Ingushetia, Chechnya, and Dagestan.

What’s interesting here is that few, if any, of the men had notable ties to that region. This was simply the thrust of the local and online extremist networks. Whereas previously these men might have been asked to fight in Afghanistan and Iraq or to train in Pakistan, they were being drawn to the Caucasus, where the idea of a global jihad got its start. This coincides with the rising violence in the region following the war in Georgia, which reignited separatist hopes after Abkhazia and South Ossetia were given independence. Unless this European terrorist bust was a coincidence, it looks like Chechnya and the surrounding region will see an influx of foreign fighters, one that may already be under way.

2011: The Year Ahead by Shashank Joshi for RUSI

Many world leaders will be glad to see the back of 2010 – the year was the deadliest yet for the coalition in Afghanistan, Iran raced ahead with its nuclear programme, terrorism-related fatalities in Pakistan alone neared 7,000 [1] – but the prognosis for the twelve months ahead is hardly better. An enfeebled Europe, resurgent China, Afghanistan in stasis, and emboldened Lashkar-e-Taiba indicate that the next year will see its fair share of violence, upheaval, and discord. Looking ahead, here are some predictions, questions and laments.


Europe will slip further into irrelevance. Britain, in possession of the most powerful military on the continent (spending more on its armed forces than Russia, in absolute terms and relative to output), has eviscerated its expeditionary capacity in the most dramatic change to its defence policy since the 1967-1971 series of decisions to withdraw forces ‘east of Suez’. [2] The Strategic Defence Review culminated in the axing of Harrier jump jets, the flagship HMS Ark Royal, Nimrod spy planes, and 42,000 defence-related jobs. Remarkably, no jet aircraft will be able to fly from British aircraft carriers until 2019.

But 2010 also highlighted much more serious holes in the European project itself. Ireland’s banking sector has imploded, Greece has a budget crisis, Portugal is buckling under enormous private-debt burdens and Spain enjoys every one of these three afflictions. The Eurozone has proven lethargic at debt restructuring, and vulnerable to persistent squabbling about how to co-ordinate both this process and fiscal stimulus. [3] For the first time since its inception, the Euro’s collapse does not seem entirely implausible – a fact only underscored by Germany’s insistence to the contrary. The once-popular notion that the Euro could replace the dollar as the world’s reserve currency is now laughable.

In this environment, November’s Anglo-French defence accord papered over the cracks. Driven by necessity, this is not the dream of European collective security envisioned by Tony Blair and Lionel Jospin twelve years ago. Europe’s problems verge on the crippling, and every ounce of political capital has already been spent on the bruising battle to conclude the disappointing Lisbon Treaty. The new European External Action Service (EEAS) will harmonise some European external policies, but not much more. Where Europe can bash out a common position, as on Iran, this will help – but on the darker spots where it cannot, such as how to handle Russia, European institutions will be neatly sidestepped. As a major actor on the world stage, Europe has fallen from grace. [4]

NATO, in turn, has cobbled together a predictably vacuous ‘strategic concept’. It has been a damage limitation exercise, demonstrating that nothing has done as much to underscore the limits of NATO as the alliance’s de facto disintegration in Afghanistan, evident in the tightfistedness of junior states in supplying trainers to Afghan security forces.


In 2010, the truism that the world’s economic centre of gravity is shifting eastward was brought home to the West when China’s economy eclipsed that of Japan to become the world’s second-largest. [5] The People’s Republic will continue to cast a quickly spreading shadow over nearly every area of concern to Britain and the US.

President Obama’s fawning visit to Beijing at the end of 2009 implicitly acknowledged the bruising that the US had taken in the years of crisis, along with China’s new stature. David Cameron later led a chastened trade delegation, the contrast between retrenching Britain and booming China clear to all. China’s rulers responded not with humility, or with the measured reticence that has characterised its post-reform foreign policy, but with a brash swagger that all but shredded the assiduously cultivated doctrine of ‘peaceful rise’.

In the summer, China surprisingly announced that its territorial claims in the South China Sea comprised a ‘core interest’. It responded apoplectically when Japan arrested a Chinese trawler captain in the East China Sea and threatened to try him under Japanese law. Beijing, taking a leaf out of Russia’s book in squeezing Europe with gas supplies, suspended much-needed rare earth exports, of which it produces 97 per cent of the world’s supply, to Japan for two months.

On the Korean peninsula, China rather churlishly refused to identify North Korea as responsible for the sinking of a South Korean ship and, later in the year, reacted with alarming sanguinity to the shelling of the latter’s territory.

This is all the more troubling because it follows years in which China and the United States had co-operated well on North Korea, terrorism, counter-narcotics, trade, investment and agricultural issues. Sino-Japanese relations had also improved greatly after their Koizumi-era lows. China did indeed appear to be evolving into what Washington, rather patronisingly, called a ‘responsible stakeholder’. Now, American, Japanese, South Korean, Indian and others’ perceptions of Chinese intent have markedly darkened.

Japan, shocked by its vulnerability to Chinese pressure, immediately sought to diversify its rare earth imports. By mid-summer, it had already announced an expansion of its submarine program, and in December its military chiefs unveiled a new strategy that would see a shift away from Russia-oriented heavy armour and artillery towards China-oriented mobile units, capable of responding quickly to maritime incidents in the future. [6]

South Korea – a country which has traditionally looked at Tokyo with deep wariness (for obvious historical reasons) – has upped defence co-operation with Japan, and both nations have sought to renew their close links with the US military. Southeast Asian nations are flocking to New Delhi, and India, in turn, is convinced that it is being punished by China for drawing closer to Washington over the last five years. What is without doubt is that China is bringing containment, something of which it speaks with fear and anger, upon itself.

Over the next year, three questions will be crucial.

First, will China succeed in its necessary economic reforms? It is imperative for Beijing to upgrade to higher value-added manufacturing and services, to shift the growth burden towards the domestic – rather than export – market and, importantly, to ensure that growth diffuses into the interior rather than remaining a coastal phenomenon.

This may seem tangential to international security, but 20 million workers are estimated to have lost their jobs in the aftermath of the global financial crisis. Popular demonstrations – what the official statistics call ‘mass incidents’ – have risen from 8,700 in 1993, to 40,000 in 2000, to 74,000 in 2004. [7] An economically-driven loss of legitimacy is amongst the Chinese Communist Party’s worst nightmares. Stagnation would not only hit local economies – like Japan – hard, but nationalism and diversionary rhetoric would further shred what little regional trust is left.

Second, how will the upcoming transfer of power in 2011 be managed? If President Hu Jintao tries to cling on to his chairmanship of the Central Military Commission (as his predecessor, Jiang Zemin, did), a mild power struggle could follow. This would inject greater unpredictability into an already opaque decision-making system. Moreover, the balance of civil-military relations in China is in flux; the composition of the Politburo and other key power centres will shape the armed forces’ autonomy. Any expansion would risk increasing the possibility of standoffs and crises, particularly in the disputed waters where naval activities are hard for civilians to monitor.

Third, and most importantly, how will the US and its allies balance their enormous reliance on China as a regional economic hub with its newfound swagger? As Hillary Clinton famously asked Kevin Rudd, ‘How do you deal toughly with your banker?’ [8] As yet, no one in Washington has formulated an answer.

South Asia

In South Asia, regional tension has been a predictable constant, one that has remained at an ugly equilibrium since the Mumbai attacks of 2008. But 2011 is likely to accentuate some pernicious trends. First, the scale of Pakistan’s turmoil is ill-understood in the West. In 2010 more civilians are likely to have been killed in terrorist violence in Pakistan than in Afghanistan, an outright warzone. The catastrophic floods may have caused up to $43bn of damage – a third of Pakistan’s GDP.

The floods also burnished the reputation of the charitable fronts for militant groups, such as Jamaat-ud-Dawa (JUD), a front for Lashkar-e-Taiba (LeT), and eroded the already pathetic reputation of the enfeebled civilian government. The human cost aside, the obliteration of Pakistani infrastructure and discrediting of state capacity are likely to furnish ideal conditions for the flourishing of violent groups directed not only at Afghanistan and India, but also at Pakistan itself. 2011 will see at least as severe levels of violence.

LeT, described in the Spring as ‘the new Al-Qa’ida’ and responsible for the Mumbai attacks, receives copious finance from Saudi sources and continues to enjoy the patronage of Pakistan’s military. [9] Its interests are global, and it will almost certainly strike a Western target within years, if not in 2011. But western intelligence agencies continue to see the group as peripheral, an irritant to India rather than a major transcontinental threat. This is a grave mistake, driven by short-term pressures to focus on Pakistan’s protection of Afghan-centric militants, such as the so-called Quetta Shura Taliban, and the need to keep Islamabad onside as long as the war in Afghanistan continues.

India, meanwhile, has suffered a torrid year in parts of Kashmir, where it has been largely unable to blame Pakistan for the discontent. Yet Washington and European capitals have responded with the facile idea that a ‘settlement’ on Kashmir would ‘fix’ the problem, end Pakistani sponsorship of militants, and cease nuclear competition. This is a pipedream, underpinned by a simplistic reading of Pakistan’s security establishment, and a failure to understand India’s focus to the east, on China, rather than to the failing state to its west.

Above all, no overt peace process will budge until India sees ‘deliverables’ – concrete Pakistani action against terrorists. This has not been forthcoming, and the US – slightly sheepishly – has not deemed it prudent to force the matter. A spectacular backchannel may be whirring away, as was the case between 2004 and 2007, but there is no indication that Pakistan’s powerful army chief General Kayani is as inclined as General Musharraf was towards a solution that would strip his organisation of much of its raison d’être. [10]


And Afghanistan? The past year has seen increasingly desperate efforts to begin ‘negotiations’, a catch-all term including both re-integration of junior Taliban fighters and reconciliation with the broader Taliban and affiliated movements. This came to a head in tragi-comic fashion in November, when British intelligence was fooled into introducing to Hamid Karzai a Pakistani shopkeeper masquerading as key Taliban commander Mullah Akhtar Mohammed Mansour, who consequently pocketed up to half a million dollars. Although this will inflict no lasting damage, it epitomizes the tribulations of what has been the coalition’s deadliest year in Afghanistan yet.

Optimists insist that the counterinsurgency strategy introduced by the US at the end of 2009 is yet to bear fruit. Counterinsurgency does, indeed, take time, as security spreads outwards from small spots where troops are concentrated. Moreover, the coalition is contending with the legacy of eight years in which the war was treated as souped-up peacekeeping, and starved of resources and attention by the conflict in Iraq.

But the town of Marjah may be a troubling microcosm for the war’s wider trajectory. In February 2010, Major General Nick Carter insisted that ‘in three months’ time or thereabouts, we should have a pretty fair idea about whether we have been successful.’ [11] Only after ten months, in December, could the battle be declared ‘essentially over’ – as late as September, complex guerrilla attacks on troops were daily occurrences and civilian life was far from normal.

President Obama is almost certain to resist domestic pressure to conduct a serious withdrawal in the summer of next year, as was initially promised. US troops will remain in force for five years.

At root, Obama and his advisers are not satisfied that a resurgence of the Taliban would not topple Karzai’s venal and ineffective government, which in turn would furnish more extremist groups allied with Al-Qa’ida, such as the Haqqani Network, with breathing space. Fighting will continue in 2011, and it will be fierce even as the next year draws to a close.

But what is most worrying is complacency amongst some officials. Mark Sedwill – NATO’s top diplomat in Afghanistan – claimed in the autumn that children were safer growing up in Afghan cities than in London, New York or Glasgow. This remarkably Kabul-centric observation was met with open derision, but it reflects the desperate struggle between NATO and the Taliban for the narrative of an unloved war.

The Road Ahead

What, then, will be the narrative of 2011? Above all, the picture that emerges is of the ‘weary titan’ – staggering, as Joseph Chamberlain observed of Britain, ‘under the too vast orb of his fate.’

Washington will look on with dismay as 100,000 American troops labour in service of a hollow and corrupt narco-state whose security remains precarious and borderlands remain riddled with overlapping layers of local and global terrorists. Across those borders, to the east, lies a militarised nuclear-weapons power with a virtually unbroken history of state-sponsored terrorism. This crumbling state, Pakistan, is in the process of being handed fresh nuclear reactors by an Asian giant whose naval strength and bullying tactics have shaken up the longstanding faith in regional growth around a Chinese pivot. To the west lies a nuclear aspirant, Iran, which has spent the year thumbing its nose at its critics, largely with success.

Europe looks upon this process from afar, mired in debt and shrunken in influence, gradually sending home its ineffectual contributions to the Afghan debacle. Others, like Turkey, India and Brazil, bide their time, engaging carefully with the US but shunning the ‘responsible stakeholder’ straitjacket that Washington would delight in foisting upon them.

In 2004, Karl Rove famously insisted that ‘we’re an empire now, and when we act, we create our own reality.’ [12] We now know that Rove spoke at the inflection point of the ‘unipolar moment’. 2010 has vindicated the lessons of the years since. Creating reality is hard to do, because reality resists. Futurologists may fear the black swans that jerk the international system from its orderly trajectory, but the real troubles of 2011 will be those that can be traced in an unbroken line from the previous year. The violence of emerging terrorist groups, the remarkable vigour of the Afghan insurgency, the nuclear ambitions of state sponsors of terrorism, and the ominously shifting balance of power in Asia – when the last American soldier leaves Iraq on 31 December 2011, it will still be these questions that keep those in the corridors of power awake.

Shashank Joshi is a doctoral candidate at the Department of Government, Harvard University.

The views expressed here do not necessarily reflect those of RUSI.


[1] South Asia Terrorism Portal, ‘Fatalities in Terrorist Violence in Pakistan 2003–2010’, updated 5 December 2010, accessed 14 December 2010

[2] Trevor Taylor, ‘What’s New? UK Defence Policy Before and After the SDSR’, RUSI Journal (December 2010)

[3] Barry Eichengreen, ‘Europe’s Inevitable Haircut’, Project Syndicate, 9 December 2010, accessed 14 December 2010

[4] Sophie C. Brune et al., Restructuring Europe’s Armed Forces in Times of Austerity, RUSI report, 3 December 2010

[5] Financial Times, ‘Chinese economy eclipses Japan’s’, 16 August 2010

[6] New York Times, ‘Japan Plans Military Shift to Focus More on China’, 12 December 2010

[7] William Overholt, ‘China in the Global Financial Crisis’, The Washington Quarterly (January 2010), p28

[8] The Guardian, ‘US Embassy Cables: Hillary Clinton ponders US relationship with its Chinese “banker”’, 4 December 2010, accessed 14 December 2010

[9] Ashley J. Tellis, ‘Bad Company – Lashkar-e-Tayyiba and the Growing Ambition of Islamist Militancy in Pakistan’, Testimony to the US Congress, 11 March 2010, accessed 14 December 2010

[10] Steve Coll, ‘The Back Channel’, accessed 14 December 2010

[11] BBC News, ‘Afghanistan: Marjah battle not won yet’, 24 June 2010, accessed 14 December 2010

[12] Ron Suskind, ‘Faith, Certainty and the Presidency of George W. Bush’, The New York Times Magazine, 17 October 2004

OSINT Research: Iranian Nuclear Program by Recorded Future and Ninja Shoes

The Iranian nuclear program has a long history (starting in the 1970s) and is shrouded in secrecy and drama. Here we will demonstrate how Recorded Future can be used for OSINT analysis of the program.

To start, we create a watchlist of Iranian nuclear facilities – using the new Watchlist functionality in Recorded Future. This can be used to quickly find relationships between facilities, people involved in them, related events, etc.

With that we can easily review 2010 for all known Iranian nuclear facilities and, for example, find a major peak around August 21, when the Bushehr reactor was first loaded with fuel rods. Vice President Ali Akbar Salehi called August 21 “a day to remember in the fluctuating history of Bushehr Nuclear Power Plant.”

With that we might be curious: who else has made comments around the Iranian nuclear facilities? These could potentially be people worth tracking. By exploring the network of Iranian nuclear facilities (using our watchlist) we can see the auto-generated network, as well as drill into a particular node of it.

We might get interested in Sergei Kiriyenko, who has a prominent role in the network, and do a follow-up query on Quotation Sergei Kiriyenko to find interesting comments by him on the Bushehr facility.

Given that, we might look for any comments regarding the future of Iranian nuclear facilities and find, for example, Ali Akbar Salehi commenting on Bushehr being at full pace in March of 2011.

We can extend this to look for forward-looking indications through the end of 2012 regarding Iranian nuclear facilities.

Exploring the network around the Bushehr facility we realize there’s a prominent harbor there – and drill into the network around that.

We find people like Ali Akbar Salehi (head of the Iranian nuclear program) – perhaps obvious. We find Mohammad Rastad (head of the port), who is more interesting – and drilling into him we find an interesting connection to Germany, a country that had early involvement in the Iranian nuclear program.

Mehr News Agency quoted Mohammad Rastad as saying, “Hansa India is 243 meters long and 32 meters wide and is the second-largest ship ever entering this port and holds the highest number of containers among ships landing in Bushehr.” Hansa India has been hired by Germany and is registered in Hamburg, he added.

Likewise we find an interesting connection to Vladimir Putin in the network.

This originates from this article – “Iran is entitled to peaceful use of nuclear energy under international supervision. Sergei Kiriyenko said at a meeting with Prime Minister Vladimir Putin that the uranium fuel will be loaded into the reactor in Iran’s southern port of Bushehr on Saturday.”
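The entity network being explored here can be sketched as a simple document co-occurrence graph. The snippet below uses plain Python with a handful of invented example documents, not Recorded Future’s actual API:

```python
# Toy co-occurrence network: entities that appear in the same document
# get an edge between them; "drilling into" a node is filtering its edges.
from collections import defaultdict
from itertools import combinations

# Invented example documents, each reduced to the entities it mentions.
documents = [
    {"Bushehr", "Sergei Kiriyenko", "Vladimir Putin"},
    {"Bushehr", "Ali Akbar Salehi"},
    {"Bushehr", "Mohammad Rastad", "Hansa India"},
]

edges = defaultdict(int)
for doc in documents:
    for a, b in combinations(sorted(doc), 2):
        edges[(a, b)] += 1  # edge weight = number of shared documents

def neighbours(entity: str) -> set:
    """Every entity sharing at least one document with the given one."""
    return {a if b == entity else b
            for (a, b) in edges if entity in (a, b)}

print(sorted(neighbours("Bushehr")))
```

The real system adds timestamps and sentiment to each edge, but the analytical move is the same: start from a watchlist node, pull its neighbours, and follow the unexpected ones.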


Analyzing an entire government program, such as the Iranian nuclear program, is very complex. Open sources can be very helpful, but the spectrum of entities to cover – facilities, people, companies, technologies – is broad. Recorded Future’s watchlist functionality combined with temporal and network queries can make a big difference.

Sound like something you’re interested in? Contact our Federal Team today!

Top 10 renewable energy technology breakthroughs in 2010 by Anupam

With our planet in desperate need of new eco-friendly energy generating systems, researchers across the globe have been working hard to develop systems that can power the world of the future in a sustainable fashion. The year 2010 saw some great breakthroughs in the field of renewable energy technology which, when fully developed, could help create a better world. Here we have compiled a list of 10 such breakthroughs that are bound to have a significant impact in the future.

• IBM’s solar cell created from “earth abundant” materials

Researchers at IBM created an inexpensive solar cell from materials that are dirt cheap and easily available. The layer that absorbs sunlight and converts it into electricity is made with copper, tin, zinc, sulfur and selenium. The best part of the solar cell is that it still manages to hit an efficiency of 9.6 percent, which is much higher than earlier attempts to make solar panels using similar materials.

• MIT’s Concentrated Solar Funnel

A group of researchers at MIT devised a way to concentrate solar energy 100 times more than a traditional photovoltaic cell. The system could drastically alter how solar energy is collected in the near future, as there will no longer be a need to build massive solar arrays to generate large amounts of power. The research determined that carbon nanotubes will be the primary instrument used in capturing and focusing light energy, allowing for not just smaller but more powerful solar arrays.

• Wake Forest University’s Light Pipes

Researchers at the Wake Forest University in North Carolina made a breakthrough by developing organic solar cells with a layer of optical fiber bristles that doubles the performance of the cells in tests. The prototype solar cell has been developed by David Carroll, who is the chief scientist at a spin-off company called FiberCell. The problem with standard flat panels is that some sunlight is lost through reflection. To reduce this effect, the research team took a dramatic approach by stamping optical fibers onto a polymer substrate that forms the foundation of the cell. These fibers, dubbed the “Light Pipes,” are surrounded by thin organic solar cells applied using a dip-coating process, and a light absorbing dye or polymer is also sprayed onto the surface. Light can enter the tip of a fiber at any angle. Photons then bounce around inside the fiber until they are absorbed by the surrounding organic cell.

• Louisiana Tech University’s CNF-PZT Cantilever

Created by a research team at Louisiana Tech University, the CNF-PZT Cantilever is a breakthrough energy-harvesting device that uses waste heat from electronic gadgets to power them. It consists of a carbon nanotube film on a cantilever base of piezoelectric material. The film absorbs heat and forces the piezoelectric cantilever to bend, which generates an electric current in the material. The cantilever is so small that thousands of them can be built into a device, allowing it to harvest its own wasted energy.

• New Energy Technologies’ see-through glass SolarWindow

New Energy Technologies has developed a working prototype of the world’s first glass window capable of generating electricity. Until now, solar panels have remained opaque; the prospect of a see-through, electricity-generating glass window has been limited by the use of metals and other expensive processes that block visibility and prevent light from passing through the glass. The technology is made possible by the world’s smallest working organic solar cells, developed by Dr. Xiaomei Jiang at the University of South Florida. Unlike conventional solar systems, New Energy’s cells generate electricity from both natural and artificial light sources, outperforming today’s commercial solar and thin-film technologies by as much as 10-fold.

• Purdue University’s system to harvest heat from car’s exhaust

Researchers at Purdue University have created a system that harvests heat from a car’s exhaust to generate electricity. The recovered energy is fed into the vehicle’s onboard batteries, reducing engine load and fuel consumption.

• Innowattech’s Piezoelectric IPEG PAD

Innowattech recently created piezoelectric generators that can be used as normal rail pads but generate renewable energy whenever trains pass over them. The company tested the technology by replacing 32 railway pads with its new IPEG PADs; the pads generated enough electrical signal to determine the number of wheels, the weight and position of each wheel, and even the train’s speed and wheel diameter. The company states that stretches of track carrying between 10 and 20 ten-car trains an hour can produce up to 120 kWh of renewable electricity per hour, which can be used by the railways or fed into the grid.

• Sony’s Flower Power

Sony recently demonstrated new dye-sensitized solar cells (DSSCs) for energy-generating windows that beautify your home as well. The decorative panels use screen printing to produce custom designs according to the consumer’s preferences and can be made in any color the user specifies.

• Plant mimicking machine produces fuels using solar energy

A team of researchers in the US and Switzerland has created a machine that, like a plant, uses solar energy to produce fuels that can later be used in a variety of ways. The machine uses the sun’s rays and a metal oxide called ceria to break down carbon dioxide or water into fuels that can be stored and transported. Unlike solar panels, which work only while the sun shines, the machine is designed to store energy for later use.

• CSIRO’s Brayton Cycle Project

Australia’s national science agency, CSIRO, has developed a technology that requires only sunlight and air to generate electricity, making the system ideal for areas facing acute water shortages. Unlike conventional solar thermal systems, which concentrate sunlight to heat water into high-pressure steam that drives a turbine, the solar Brayton Cycle project uses a field of mirrors known as heliostats to focus the sun’s rays onto a 30-meter (98 ft) solar tower, heating compressed air that then expands through a 200 kW turbine to generate electricity.

High-Flux Solar-Driven Thermochemical Dissociation of CO2 and H2O Using Nonstoichiometric Ceria

William C. Chueh,¹ Christoph Falter,² Mandy Abbott,¹ Danien Scipio,¹ Philipp Furler,² Sossina M. Haile¹,* and Aldo Steinfeld²,³,*

Author affiliations:

  1. Materials Science, California Institute of Technology, MC 309-81, Pasadena, CA 91125, USA.
  2. Department of Mechanical and Process Engineering, Eidgenössische Technische Hochschule (ETH) Zürich, 8092 Zürich, Switzerland.
  3. Solar Technology Laboratory, Paul Scherrer Institute, 5232 Villigen PSI, Switzerland.

  * To whom correspondence should be addressed. E-mail: (S.M.H.); (A.S.)


Because solar energy is available in large excess relative to current rates of energy consumption, effective conversion of this renewable yet intermittent resource into a transportable and dispatchable chemical fuel may ensure the goal of a sustainable energy future. However, low conversion efficiencies, particularly with CO2 reduction, as well as utilization of precious materials have limited the practical generation of solar fuels. By using a solar cavity-receiver reactor, we combined the oxygen uptake and release capacity of cerium oxide and facile catalysis at elevated temperatures to thermochemically dissociate CO2 and H2O, yielding CO and H2, respectively. Stable and rapid generation of fuel was demonstrated over 500 cycles. Solar-to-fuel efficiencies of 0.7 to 0.8% were achieved and shown to be largely limited by the system scale and design rather than by chemistry.

  • Received for publication 15 September 2010.
  • Accepted for publication 23 November 2010.

The Political Power of Social Media by Patrick Meier

Clay Shirky just published a piece in Foreign Affairs on “The Political Power of Social Media.” I’m almost done writing my dissertation’s literature review of digital activism in repressive states, so this is a timely write-up by Clay, who also sits on my dissertation committee. The points he makes echo a number of my blog posts and thus provide further support to some of the arguments articulated in my dissertation. I’ll use this space to provide excerpts from and commentary on his 5,000+ word piece to include in my literature review.

“Less than two hours after the [Philippine Congress voted not to impeach President Joseph Estrada], thousands of Filipinos […] converged on Epifanio de los Santos Avenue, a major crossroads in Manila. The protest was arranged, in part, by forwarded text messages reading, ‘Go 2 EDSA. Wear blk.’ The crowd quickly swelled, and in the next few days, over a million people arrived, choking traffic in downtown Manila.”

“The public’s ability to coordinate such a massive and rapid response — close to seven million text messages were sent that week — so alarmed the country’s legislators that they reversed course and allowed the evidence to be presented. Estrada’s fate was sealed; by January 20, he was gone. The event marked the first time that social media had helped force out a national leader. Estrada himself blamed ‘the text-messaging generation’ for his downfall.”

“As the communications landscape gets denser, more complex, and more participatory, the networked population is gaining greater access to information, more opportunities to engage in public speech, and an enhanced ability to undertake collective action. In the political arena […] these increased freedoms can help loosely coordinated publics demand change.”

See this blog post on Political Change in the Digital Age: The Prospect of Smart Mobs in Authoritarian States.

“The Philippine strategy has been adopted many times since. In some cases, the protesters ultimately succeeded, as in Spain in 2004, when demonstrations organized by text messaging led to the quick ouster of Spanish Prime Minister José María Aznar, who had inaccurately blamed the Madrid transit bombings on Basque separatists. The Communist Party lost power in Moldova in 2009 when massive protests coordinated in part by text message, Facebook, and Twitter broke out after an obviously fraudulent election.”

“There are, however, many examples of the activists failing, as in Belarus in March 2006, when street protests (arranged in part by e-mail) against President Aleksandr Lukashenko’s alleged vote rigging swelled, then faltered, leaving Lukashenko more determined than ever to control social media. During the June 2009 uprising of the Green Movement in Iran, activists used every possible technological coordinating tool to protest the miscount of votes for Mir Hossein Mousavi but were ultimately brought to heel by a violent crackdown. The Red Shirt uprising in Thailand in 2010 followed a similar but quicker path: protesters savvy with social media occupied downtown Bangkok until the Thai government dispersed the protesters, killing dozens.”

“The use of social media tools — text messaging, e-mail, photo sharing, social networking, and the like — does not have a single preordained outcome. Therefore, attempts to outline their effects on political action are too often reduced to dueling anecdotes.”

Clay picks up on some of my ongoing frustration with the “study” of digital activism. He borrows the dueling analogy from earlier blog posts of mine in which I chide the popular media for sensationalizing anecdotes.

“Empirical work on the subject is also hard to come by, in part because these tools are so new and in part because relevant examples are so rare. The safest characterization of recent quantitative attempts to answer the question, Do digital tools enhance democracy? (such as those by Jacob Groshek and Philip Howard) is that these tools probably do not hurt in the short run and might help in the long run — and that they have the most dramatic effects in states where a public sphere already constrains the actions of the government.”

Reading this made me realize that I need to get my own empirical results out in public in the coming weeks. As part of my dissertation research, I used econometric analysis to test whether an increase in access to mobile phones and the Internet serves as a statistically significant predictor of anti-government protests. So I’ll add this to my to-do list of blog posts and will also share my literature review in full as soon as I’m done with that dissertation chapter.
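The kind of test described above can be sketched as a toy regression. This is purely illustrative: the dissertation's actual data, model specification, and controls are not described in the post, and the numbers and the `ols_slope` helper below are hypothetical.

```python
# Illustrative sketch only: a bare-bones least-squares regression of protest
# counts on mobile-phone penetration. The real analysis would use panel data
# with country and year controls; these figures are invented for the example.

def ols_slope(x, y):
    """Slope of y on x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Hypothetical series: mobile subscriptions per 100 people vs. protest events.
penetration = [5, 12, 20, 33, 47, 60]
protests = [1, 2, 2, 4, 6, 7]

slope = ols_slope(penetration, protests)
print(f"Estimated effect: {slope:.4f} protests per point of penetration")
```

A positive, statistically significant slope in a properly controlled model is the sort of result the post's question turns on; this sketch shows only the mechanics, not the inference.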

In the meantime, have a look at the Global Digital Activism Dataset (GDADS) project that both Clay and I are involved in to spur more empirical research in this space.

“Although the story of Estrada’s ouster and other similar events have led observers to focus on the power of mass protests to topple governments, the potential of social media lies mainly in their support of civil society and the public sphere — change measured in years and decades rather than weeks or months. [We] should likewise assume that progress will be incremental and, unsurprisingly, slowest in the most authoritarian regimes.”

I wrote up a blog post just a few weeks ago on “How to Evaluate Success in Digital Resistance: Look at Guerrilla Warfare,” which makes the same argument. Clay goes on to formulate two perspectives on the role of social media in non-permissive environments: the instrumental versus the environmental school of thought.

“The instrumental view is politically appealing, action-oriented, and almost certainly wrong. It overestimates the value of broadcast media while underestimating the value of media that allow citizens to communicate privately among themselves. It overestimates the value of access to information, particularly information hosted in the West, while underestimating the value of tools for local coordination. And it overestimates the importance of computers while underestimating the importance of simpler tools, such as cell phones.”

“According to [the environmental view], positive changes in the life of a country, including pro-democratic regime change, follow, rather than precede, the development of a strong public sphere. This is not to say that popular movements will not successfully use these tools to discipline or even oust their governments, but rather that U.S. attempts to direct such uses are likely to do more harm than good. Considered in this light, Internet freedom is a long game, to be conceived of and supported not as a separate agenda but merely as an important input to the more fundamental political freedoms.”

One aspect that I particularly enjoy about Clay’s writings is his use of past examples from history to bolster his arguments.

“One complaint about the idea of new media as a political force is that most people simply use these tools for commerce, social life, or self-distraction, but this is common to all forms of media. Far more people in the 1500s were reading erotic novels than Martin Luther’s “Ninety-five Theses,” and far more people before the American Revolution were reading Poor Richard’s Almanack than the work of the Committees of Correspondence. But those political works still had an enormous political effect.”

“Just as Luther adopted the newly practical printing press to protest against the Catholic Church, and the American revolutionaries synchronized their beliefs using the postal service that Benjamin Franklin had designed, today’s dissident movements will use any means possible to frame their views and coordinate their actions; it would be impossible to describe the Moldovan Communist Party’s loss of Parliament after the 2009 elections without discussing the use of cell phones and online tools by its opponents to mobilize. Authoritarian governments stifle communication among their citizens because they fear, correctly, that a better-coordinated populace would constrain their ability to act without oversight.”

Turning to the fall of communism, Clay juxtaposes the role of communication technologies with the inevitable structural macro-economic forces that lifted the Iron Curtain.

“Any discussion of political action in repressive regimes must take into account the astonishing fall of communism in 1989 in eastern Europe and the subsequent collapse of the Soviet Union in 1991. Throughout the Cold War, the United States invested in a variety of communications tools, including broadcasting the Voice of America radio station, hosting an American pavilion in Moscow  […], and smuggling Xerox machines behind the Iron Curtain to aid the underground press, or samizdat.”

“Yet despite this emphasis on communications, the end of the Cold War was triggered not by a defiant uprising of Voice of America listeners but by economic change. As the price of oil fell while that of wheat spiked, the Soviet model of selling expensive oil to buy cheap wheat stopped working. As a result, the Kremlin was forced to secure loans from the West, loans that would have been put at risk had the government intervened militarily in the affairs of non-Russian states.”

“In 1989, one could argue, the ability of citizens to communicate, considered against the background of macroeconomic forces, was largely irrelevant. Communications tools during the Cold War did not cause governments to collapse, but they helped the people take power from the state when it was weak. […]. For optimistic observers of public demonstrations, this is weak tea, but both the empirical and the theoretical work suggest that protests, when effective, are the end of a long process, rather than a replacement for it.”

Clay also emphasizes the political importance of conversation over the initial information dissemination effect:

“Opinions are first transmitted by the media, and then get echoed by friends, family members, and colleagues. It is in this second, social step that political opinions are formed. This is the step in which the Internet in general, and social media in particular, can make a difference. As with the printing press, the Internet spreads not just media consumption but media production as well — it allows people to privately and publicly articulate and debate a welter of conflicting views.”


How about the role of social media in organization and coordination?

“Disciplined and coordinated groups, whether businesses or governments, have always had an advantage over undisciplined ones: they have an easier time engaging in collective action because they have an orderly way of directing the action of their members. Social media can compensate for the disadvantages of undisciplined groups by reducing the costs of coordination. The anti-Estrada movement in the Philippines used the ease of sending and forwarding text messages to organize a massive group with no need (and no time) for standard managerial control. As a result, larger, looser groups can now take on some kinds of coordinated action, such as protest movements and public media campaigns, that were previously reserved for formal organizations.”

I’m rather stunned by this argument: “Social media can compensate for the disadvantages of undisciplined groups by reducing the costs of coordination.” Seriously? If a group is unorganized and undisciplined, advocating that it use social media—particularly in a repressive environment—is highly inadvisable. Turning an unorganized and undisciplined mob into a flash mob thanks to social media tools does not make it a smart mob. Clay’s argument directly contradicts the rich empirical research that exists on civil resistance in authoritarian states.


“For political movements, one of the main forms of coordination is what the military calls ‘shared awareness,’ the ability of each member of a group to not only understand the situation at hand but also understand that everyone else does, too. Social media increase shared awareness by propagating messages through social networks.”

I wonder what role the Ushahidi platform can play in this respect.

New Technologies Will Drive Federal Mobility In 2011 by Tom Temin

Thomas R. Temin is editor in chief of FedInsider and brings 30 years of publishing experience in media and information technology. Tom is also co-host of The Federal Drive with Tom Temin and Amy Morris, a weekday morning news and talk program on WFED AM 1500 in Washington, D.C.


  • Various analysts, such as those at Gartner, are saying that the estimated 55 million tablets to be shipped in 2011 will show a 181 percent increase over 2010 sales of 19.5 million. Forrester Research predicts that tablets will overtake notebook sales in 2012, and desktop PCs by 2013.
  • Tablets are not new, but until the iPad popularized them with a new form factor and functionality, they had been strictly a niche product, typically taking the form of a mechanically modified notebook computer. Now the iPad has competition from Samsung, Sony and Microsoft-HP. That last one has been disappointing, but Microsoft CEO Steve Ballmer is promising a Windows 7 tablet soon that will be competitive with the iPad.
  • Smartphone sales rose again in 2010, after falling slightly in 2009 thanks to the worldwide recession. But within that market, things are changing. One market researcher notes that for carrier Verizon, in the course of a year, Android phones went from being a single digit part of smartphone sales to 80 percent. The loser? The clunky Blackberry, which is fast losing market share.
  • Estimates of what the PC market will do are all over the place. But the traditional notebook and standard desktop or under-desk configurations will still exist for a long time. A big factor here is desktop virtualization, in which users have thin clients on their desks but their operating system and applications are all running as virtual machines on a blade somewhere down the hall or across the country.
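As a quick sanity check on the tablet figures in the first bullet (a back-of-envelope calculation, not Gartner's methodology), the quoted growth rate follows directly from the two shipment estimates; the small difference from 181 percent presumably reflects rounding in the published unit numbers.

```python
# Back-of-envelope check of the tablet-shipment figures quoted above:
# 19.5 million units in 2010, an estimated 55 million in 2011.
shipments_2010 = 19.5  # millions of units
shipments_2011 = 55.0

growth_pct = (shipments_2011 / shipments_2010 - 1) * 100
print(f"Year-over-year growth: {growth_pct:.0f}%")  # ~182%, in line with the quoted 181%
```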

What does all this mean for the government market?

In 2011, I believe the trend away from the traditional PC for federal workers will accelerate. The government is still largely wedded to the model of notebooks for mobile workers and desktops for everyone else. Plus a Blackberry for phone and e-mail. And why not? It’s worked all these years.

But the way people compute now is more fluid and ubiquitous. Smartphones (Android, iPhone) and tablets (the iPad and still-emerging rivals) offer close to full online computing via Wi-Fi or broadband, but with far more portability and speed than a notebook can give. This is driven in no small part by the battery dynamics of smartphones and tablets. Most of them can operate for at least a full workday without being turned off. They are basically always on.

Several specific federal initiatives are at play here, too.

Cloud computing is driving use of these devices. The most expensive iPad has only 64 gigabytes of storage in a solid-state drive, while the average PC ships with a 500GB or 1TB hard drive. But storing large numbers of files on a local C: drive makes less sense when accessing files in a cloud environment from mobile devices, from anywhere, is becoming the norm.

Virtualization is also about to change the decisions agencies make for end-user devices. A blade server can hold a dozen virtual machines or more, giving the CIO shop a much easier way to manage, update and patch users. When user assets and characteristics are virtualized and stored in either the agency data center or a cloud, they can potentially be accessed by any number of devices from anywhere there is wireless service or an Ethernet jack.

Telework can take on much richer meaning with mobility. Until now, the prevailing model was office, home or telework center. But now everyone can be mobile. Assuming competent encryption and other security mechanisms are in place — admittedly a big assumption — telework becomes a much more attractive option. For example, contractors are always working at government locations. Why can’t federal people work at contractor locations during critical stages of projects? With broadband and the agency virtual private network, a federal employee wouldn’t even need to use the contractor’s network. This model can even be more secure than using a notebook PC with its fat hard drive. If the machine is lost or stolen, there need be no danger of unauthorized access to government data.

The conservative case for Net neutrality: Letting the big ISPs impose discriminatory pricing would stifle innovation, cripple content providers, and ultimately damage the broader economy by Bill Snyder

Hey there, conservatives: Net neutrality is your issue, too.

Innovation, economic growth, and the health of content providers are what’s at stake as the FCC moves toward a new set of rules governing the Internet. Until now, much of the discussion about the future of the Internet has focused on issues like freedom of expression, fairness, and metered pricing — real concerns, to be sure. But a pair of academic research papers circulated by the Open Internet Coalition puts the issue in economic perspective.

Here’s the core of the argument, in a paper by Inimai M. Chettiar and J. Scott Holladay of New York University’s Institute for Policy Integrity:

Without Net neutrality rules, new technologies could lead to pricing practices that transfer wealth from content providers to ISPs, a form of price discrimination that would reduce the return on investment for Internet content — meaning Web site owners, bloggers, newspapers, and businesses would have less incentive to expand their sites and applications.

What’s more, developers and IT as a whole will be hurt if providers are allowed to discriminate against particular applications that might make money for someone else.

The Net neutrality issue is sometimes framed by the usual left/right split in American public life. But I’d argue that conservatives who believe in a free market should join libertarians — and, yes, liberals — in the fight for an open Net.

What a neutral Internet really means
Here’s how the Internet works today: “Last-mile facilities-based broadband Internet access service providers provide users with access to the Internet, but they are expected to route all traffic in a nondiscriminatory manner. They do not charge Internet content or application providers to reach users, and they are expected to route traffic without regard to what that traffic contains, who it is from, or where it is going,” writes Christiaan Hogendorn, a Wesleyan University economist.

To date, those principles have worked really well. The Internet has for years arguably been the most efficient engine of economic growth and job creation in the American economy. But it won’t function nearly as well if the market is rigged by ISPs.

The argument about Net neutrality has been clouded by understandable confusion about what it really means — and what it doesn’t mean.

Many people think the issue has to do with metered broadband access — that is, paying for data access by the gigabyte instead of a flat monthly rate. That’s something that many of us might object to, but for better or worse, it’s a choice the carriers may well make. The truth is, the issue of metering has nothing to do with neutrality. And frankly, the market will decide if metered pricing is a viable idea.

The real issues are more subtle. Without Net neutrality, say the researchers, “ISPs could charge content providers again when users access content. Adding these fees would increase the costs of creating Web sites and applications.”

Providers don’t talk about that directly. Instead they talk about “fast lanes” to the Internet. After all, why shouldn’t a company that wants its customers to have faster access pay more? Well, this argument sounds reasonable at first, but think about the implications: If there’s a fast lane, there has to be a slow lane. And companies stuck in that slow lane — likely to include competitors to the providers and to the providers’ business partners — are going to lose business.

Should ISPs decide which technologies will prosper?
The second key issue is about the right of users to access applications of their choosing, and the right of developers to compete on an open playing field. Or as FCC chairman Julius Genachowski said as the commission discussed the issue this week: “Specifically, this proceeding is about preserving consumers’ freedom to access lawful content and applications of their choosing over the Internet; produce and distribute content; and innovate without permission to create new businesses, services, and opportunities that no one has dreamed of yet.”

The providers deliberately obfuscate the issue by talking about “bandwidth hogs” who download too much video and play too many online games. Why shouldn’t they charge video or game providers extra, since their users are clogging up the pipes?

Again, that issue can be solved by metered pricing. But once the principle has been established that an ISP can discriminate (in the economic sense) against a particular application or technology, the market isn’t free to pick winning technologies.

There’s no telling where the next best idea will come from. It could well be a small company we’ve never heard of, but if those innovators face a discriminatory pricing wall, their ideas will never get off the ground.

The ISPs argue that additional revenue raised by new pricing schemes would allow them to spend more on badly needed infrastructure. Don’t believe it. “Most additional revenue generated for ISPs is likely to be transferred to their shareholders rather than invested in expanding broadband lines,” say the NYU researchers.

I want to close this by appealing to conservatives: If you believe in the free market and the ability of the Internet to drive innovation and create jobs and real economic growth, tell the FCC that you support Net neutrality.

I welcome your comments, tips, and suggestions. Post them here so all our readers can share them.

This article, “The conservative case for Net neutrality,” was originally published at [8].

Bridging the Digital Divide: Why Mobiles, Markets and Moore’s Law Matter by Jordon Hosmer-Henner

This Saturday, remember to toast the birth of the World Wide Web—just don’t buy it a drink. It was only twenty years ago that Tim Berners-Lee loaded the first webpage. Now it’s such an indispensable aid to modern life that researchers are evaluating not if, but how, it’s rewiring our brains. That is, of course, if you are one of the lucky third of the population that has access. The pace of adoption, however, must make one optimistic that soon children won’t remember a world where it wasn’t possible to ‘ask the internet’ from a mobile.

While the long-term impact of web access is exciting, the much more important development has been the exponential growth in data rates and the adoption of SMS. The brilliance of SMS was the realization that sufficiently short texts could be encoded into the control messages the network already had to send to keep voice traffic flowing smoothly. It has taken 18 years, but we’re starting to see tools built on SMS for everything from recording medical information to browsing job listings. The important thing to remember is that these tools need to be adopted to be useful, and that won’t happen if they aren’t applicable locally. Local ownership and innovation are critical to developing effective solutions, which is why the trend toward open-source software is so encouraging.
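That signalling-channel trick also explains the famous 160-character limit: a single message payload is 140 octets, and the GSM default alphabet packs each character into 7 bits. A rough sketch of the arithmetic (the septet-packing details of the GSM spec are simplified away here):

```python
# Why an SMS is 160 characters: the message payload is 140 octets, and the
# GSM 7-bit default alphabet packs each character into 7 bits.
PAYLOAD_OCTETS = 140
BITS_PER_CHAR = 7

max_chars = PAYLOAD_OCTETS * 8 // BITS_PER_CHAR
print(max_chars)  # 160
```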

SMS and mobile communications have penetrated society faster than any prior information technology. Billions of individuals own mobile phones, and it’s predicted that next year 85% of new handsets will be able to access the Internet. One of the primary reasons is the dramatic cost difference between installing landlines and wireless towers. Wireless standards are improving rapidly, which means developing countries can leapfrog expensive and outdated copper networks. What’s more, users see the savings themselves: a twenty-dollar phone can save hours of travel or become an essential business tool.

It’s hard not to be astounded by the impact the Internet has had in the developed world. From commerce to dating, there is hardly an aspect of society that hasn’t been altered. Expressed more broadly, advances in information sharing simplify and facilitate transactions (whether in love or business). I’m willing to transfer money to a stranger because I’ve seen the testimony of others who’ve done the same and were satisfied with their mint-condition Beanie Baby. The same principle is being applied to help farmers sell products in distant markets. Not to be hyperbolic, but the impact of billions of people carrying phones faster than early supercomputers, constantly connected to a global communications network at rates capable of streaming video, is going to be fairly substantial.

Creativity Can Lessen Leader Image by Karen Hopkin

Think of a quality that defines a strong leader. Do I hear: dynamic, driven, decisive, original? Well, I probably didn’t hear “original.” Because people who are considered “creative” are generally not viewed as leaders. That’s according to a study in the Journal of Experimental Social Psychology. [Jennifer Mueller, Jack Goncalo and Dishan Kamdar, Recognizing creative leadership: Can creative idea expression negatively relate to perceptions of leadership potential?]

People who show imagination can be seen as dreamers because their ideas have not been proven. Those seen as leaders, on the other hand, are expected to maintain order and to keep things moving forward. Yet in today’s business world, companies say they’re looking for creative CEOs who can promote change and lead their businesses in profitable new directions.

To examine what we really think about creativity, scientists asked students to present ideas for how airlines can get more revenue from their passengers. Half the students were told to come up with novel solutions, and the rest were asked to stick with something more tried-and-true. Other students who then listened to these pitches rated those who were innovative as having less leadership potential.

So go ahead, think outside the box. But if you want to scale the corporate ladder, you might consider keeping your most interesting ideas under wraps. At least until you’ve nailed that corner office.

MS quadrupling Kinect accuracy by Wesley Yin-Poole

Microsoft is working to improve the accuracy of Xbox 360 motion-sensing add-on Kinect so that it could detect finger movement and hand rotation, Eurogamer understands.

Microsoft’s Kinect team is said to be working “very hard” on a switching or compression technology that will allow a greater amount of data to pass through Kinect to the Xbox 360 console.

Kinect features are dictated by firmware so that they can be added and upgraded over time.

The output of Kinect’s depth sensor is also dictated by firmware – it is currently capped at 30 frames per second and a 320×240 resolution.

At a 640×480 resolution, however, Kinect could begin to detect fingers and hand rotation – an effective quadrupling of its accuracy.

The issue relates to the USB controller interface, Eurogamer was told. It is capable of around 35MB/s, but Kinect currently uses only around 15-16MB/s.

This artificial limit is in place because multiple USB devices can be used at once on an Xbox 360. But Microsoft is working on a technology to allow greater throughput in this regard, Eurogamer understands.

If Microsoft achieves its goal it could double the resolution of Kinect’s depth camera in each dimension – quadrupling its pixel count – with a simple dashboard update.

Microsoft had not responded to Eurogamer’s request for comment before publication, but Digital Foundry’s Rich Leadbetter described the potential accuracy improvement as “eminently doable”.

Microsoft would need to “disable or lower throughput of game installs running from USB flash drives to free up additional bandwidth,” Leadbetter said.

“All eminently doable though bearing in mind that Kinect ‘only’ needs 20MB/s for full res from both cameras.

“The resolution coming out of the depth camera via PC is indeed 640×480, but it is uncertain just how accurate the camera’s sensor is.

“Additionally, processing four times as many depth pixels could slow things down more.”
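The arithmetic behind these figures is easy to sanity-check. The bits-per-pixel values below are assumptions chosen for illustration (packed ~11-bit depth samples and 8-bit Bayer colour samples), not confirmed hardware specifications:

```python
# Back-of-the-envelope check of the figures quoted above.
depth_low = 320 * 240                  # current depth-map resolution
depth_high = 640 * 480                 # proposed depth-map resolution
pixel_gain = depth_high / depth_low    # 4.0 -> the "quadrupling" of accuracy

fps = 30
pixels_per_sec = depth_high * fps      # samples per second, per camera

# Assumed sample sizes: ~11-bit packed depth, 8-bit Bayer colour.
depth_mb = pixels_per_sec * 11 / 8 / 1e6   # ~12.7 MB/s
colour_mb = pixels_per_sec * 8 / 8 / 1e6   # ~9.2 MB/s
total_mb = depth_mb + colour_mb            # ~21.9 MB/s, near the quoted 20 MB/s

usb_capacity_mb = 35   # quoted raw USB controller capacity
current_cap_mb = 16    # quoted artificial limit
```

Under these assumptions, full resolution from both cameras fits comfortably inside the controller’s raw 35MB/s capacity but not inside the current 15-16MB/s artificial cap, which is why the switching or compression work matters.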

Last month Anton Mikhailov, a software engineer at Sony Computer Entertainment America’s research and development department, told Eurogamer Sony turned down Kinect’s 3D camera because of the limitations of the tech.

“In reality, the 3D cameras we surveyed and what Kinect ended up using, they’re 320×240 resolution, so when you’re talking about tracking fingers, or even tracking things like the rotations of your hand, you’re working with 10×10 pixels,” Mikhailov said.

“It’s very hard to get anything useful out of it.”

Mikhailov doubted Kinect’s capacity to create a decent Star Wars game “because there are so many ambiguities, and it’s nearly impossible to track the angles of your wrists”.

If Microsoft achieves its goal of improving Kinect’s accuracy, however, Mikhailov could be proved wrong.

Japan’s low-cost space programme pushes the limits by Miwa Suzuki

Despite its shoestring budget, Japan’s space programme has boldly reached for the stars, pioneering solar-powered space travel, exploring a distant asteroid and planning a robot base on the Moon.

The past year has seen Japan’s agency JAXA chalk up several world firsts, including the safe return of a deep-space probe that picked up asteroid dust from a potato-shaped space rock on an epic seven-year odyssey.

Hayabusa (Falcon) ended its five-billion-kilometre (three-billion-mile) mission when it burnt up on re-entry over the Australian outback.

Hayabusa had already safely parachuted to Earth a disk-shaped container with the particles inside.

Because asteroids are thought to date back to the dawn of our solar system, it is hoped the extra-terrestrial grains from asteroid Itokawa can help reveal secrets from as long as 4.6 billion years ago.

The Hayabusa mission — costing less than 200 yen (two dollars) per Japanese citizen over 10 years (20 billion yen) — has boosted interest in the space programme, and in science and technology, said project leader Junichiro Kawaguchi.

“Space development doesn’t foster industries directly but it can nurture people who will contribute to industries in the future,” he said. “It brought about an immensely bigger educational effect.”

Earlier this year the Japan Aerospace Exploration Agency (JAXA) also stunned earthlings everywhere when it sent a “space yacht” floating through the black void, without leaving a hint of a carbon footprint.

The kite-shaped Ikaros — short for Interplanetary Kite-craft Accelerated by Radiation of the Sun — is propelled forward by sun particles bouncing off its fold-out wings, which are thinner than a human hair.

There have been set-backs too. Last week the Akatsuki (Dawn) probe narrowly missed its entry point to the orbit of Venus, where it had been due to observe the toxic atmosphere and blistering volcanic surface for two years.


This photo, taken in June and released by the Japan Aerospace Exploration Agency (JAXA), shows JAXA personnel inspecting the capsule carried by the Japanese Hayabusa spacecraft after it parachuted back to land in the Woomera military zone in the Australian Outback.

Ground control put on a brave face after the mishap, vowing to try again when the probe and Venus have their next rendezvous in six years.

If Akatsuki makes it, it will get a close-up glimpse of what is often called our sister planet — similar in size and age to Earth but shrouded in sulphuric acid clouds and baking at 460 degrees Celsius (860 degrees Fahrenheit).

JAXA’s missions are far more ambitious than its budget would suggest.

The agency has no manned missions and operated on 339 billion yen (four billion dollars) this fiscal year — less than one-tenth of the NASA budget, and less than half the annual cost of Europe’s space programme.

Space officials are now fighting back against any further government belt tightening as they plan a follow-up probe to Hayabusa in 2014, which would explore an asteroid named 1999JU3.

JAXA says it hopes the probe will find “organic or hydrated materials” on the asteroid and establish whether “there is any relation to life on Earth”.

The science and technology minister, Yoshiaki Takagi, last month vowed that “we will strive to secure the budget so that we can offer maximum support” for the Hayabusa-2 project.

His ministry has requested a 100-fold boost to the research budget for Hayabusa-2 to some three billion yen next year.

Prime Minister Naoto Kan sounded sympathetic when he said last month that Japan “must be committed” to space projects.

In future the agency may take on an even more ambitious task.


This image, released by the Japan Aerospace Exploration Agency (JAXA) in April, shows an artist’s impression of the Japanese satellite Ikaros in space.

An expert panel advising the science and technology minister has called for sending a wheeled robot to the Moon in five years, having first considered a two-legged humanoid, which was rejected because of the Moon’s bumpy surface.

It envisions building the first lunar base by 2020, which could be staffed by advanced robots, as a key stepping stone for Japan’s space exploration, a field where Asian competition is heating up.

“It is extremely important to probe the Moon… as we now see the dawn of ‘the Age of Great Voyages’ in the solar system,” the panel said, pointing out that “China, India and other countries are aiming to probe the Moon.”

The government’s Strategic Headquarters for Space Policy believes a successful space programme does much to lift Japan’s profile on Earth.

“Our country’s space technology, its achievements and human resources are truly diplomatic resources that would boost our influence and position in the international community,” it said in a policy report.

“We will promote them as a source of our soft power.”

Capitalism’s Most Overlooked Benefit: Peace by Uri Friedman

In discussing capitalism’s attributes, argues J.T. Young at The American Spectator, we too often forget about peace:

Virtually all conflicts of the last century have been initiated by fettered market, authoritarian states. Often the world’s armed conflicts have been between two such regimes. Contrastingly, military conflicts have almost never pitted two capitalist, democratic nations against one another.

Socialist, communist, fascist, or simply non-ideological dictator-governed nations have almost always been the world’s aggressors. When capitalist democracies are drawn into armed conflict, it is almost always against such economically-fettered nations.

To prove his point, Young, who’s worked in the federal government and on Capitol Hill, points to the Korean Peninsula: “In South Korea, a capitalist market, open society, and democracy exist. In North Korea, a closed market, closed society, and totalitarian regime exist. You also have a stark distinction between peace and war.”

For capitalist countries, Young says, war is a terrible economic investment, since resources are deployed for unproductive rather than productive ends. War only becomes an option when the long-term costs of an enemy’s continued aggression outweigh the short-term costs of resisting it.

The economic calculation for non-capitalist nations is reversed, he continues. Since, by definition, their economies are not allocating resources optimally, conflict is actually the best economic investment.

Young adds that free markets are also likely to create free political systems whose checks and balances make it difficult for the government to go to war or remain at war for a protracted period of time, whereas the opposite is true in societies with “fettered markets.”

After citing the economists Milton Friedman and Friedrich Hayek to support his claims, Young concludes: “capitalism is frequently credited with only the most prosaic of goals and ends in society. In fact, it is really the protector of society’s most sublime goals.”

Google supporting research and innovation in Europe’s universities by Maggie Johnson, Director of Education & University Relations and David Harper, Head of University Relations

As a company that started out in academia, we’ve always known that a lot of the world’s best computer scientists don’t work in the private sector (or in Silicon Valley, for that matter!) but in universities and research centres around the world.

Over the years, Google has invested in a large network of research and development centres around the globe, including 11 centres across Europe, Russia and Israel – and our newly announced centre in Paris. This diversity of engineering locations means that we’re able to create culturally diverse teams – and fun working environments. These locations also enable us to stay closely in touch – and collaborate – with academics undertaking cutting-edge research at universities across Europe.

This week – building on an initiative we blogged about earlier this year – we announced nearly €3.7 million in research funding via our Focused Research Awards scheme. The grants are going to 14 universities and research centres in Switzerland, Germany, France, Italy and the United Kingdom.

The Focused Research Awards are unrestricted gifts that provide support for one to three years, and have been awarded to researchers in disciplines including software engineering, mathematical optimisation, information extraction and integration – and policy areas such as privacy. Recipients also get access to Google tools, technologies and expertise.

The list of research projects that have received focused research awards in Europe includes:

  • German Academy of Science and Technology (Acatech): User-centred Online Privacy, Henning Kagermann
  • Max Planck Institut Informatik, Germany: Robust and Scalable Fact Discovery from Web Sources, Gerhard Weikum, Martin Theobald, Rainer Gemulla
  • Saarland University, Germany: Test Amplification, Andreas Zeller, Gordon Fraser
  • EPFL, Switzerland: Automated Software Reliability Services, George Candea
  • CNRS, France and nine universities in France, Germany and Italy: Mathematical Optimization, Thorsten Koch (Zuse Institute of Berlin), Stefan Nickel (Karlsruhe Institute of Technology), Leena Suhl (University of Paderborn), Narendra Jussien (Ecole des Mines de Nantes), Pierre Bonami (CNRS/Université d’Aix/Marseille), Pierre Lopez (CNRS/LAAS in Toulouse), Denis Trystram (INP Grenoble), Safia Kedad-Sidhoum (LIP6 in Paris), Andrea Lodi (University of Bologna).
  • University of Cambridge, UK: Security-Oriented Analysis of Application Programs, Steven Hand, Robert Watson

Alongside our Focused Research Awards programme, we provide grants for more than 200 smaller research projects every year, with recent awards highlighted in our research blog. These awards typically provide partial funding for PhD students. Google also supports 40 computer science PhDs worldwide through our PhD Fellowship Programmes, and currently supports 14 students in Europe. We also host over 20 faculty members on sabbatical each year world-wide, enabling them to work with Google engineering and policy teams on special projects.

Our hope is that building close connections with universities and researchers will support innovation in Europe – and extend the research capabilities of both Google’s engineers and our colleagues in academia. You can find more information about all of our research programmes on our University Relations site.

Business Analytics 101: Enterprise Fraud Management by Ellen Joyner, Global Fraud Prevention Marketing Manager, SAS

Recently I was listening to an NPR (National Public Radio) documentary about the history of cancer and medicine’s evolution in understanding and treating cancer. This was an amazing story. I would venture a guess that nearly everyone who is reading this post has been affected by cancer in one way or another. So you can imagine my surprise and amazement when I later read a fraud article about cancer. An elderly woman was faking cancer to play on the sympathies of people and cheat them out of their money. This is what makes me so passionate about sharing the importance of fighting fraud and beating fraudsters at their game. Fraudsters, like cancer, have many faces and are often in and out before you know it.

The cancer scam starts small as an individual crime against an empathetic donor. However, once successful, this fraud expands and is characteristic of fraud that costs organizations millions each year. Financial services firms, health care, insurance – fraud is a cross-industry pain. Gartner, a highly respected research firm, defines enterprise fraud and the solutions to mitigate fraud in their most recent report, Enterprise Fraud and Misuse Management Solutions: 2010 Critical Capabilities.

“Gartner defines EFM as software that supports the detection, analysis and management of fraud across users, accounts, products, processes and channels. It monitors and analyzes user activity and behavior at the application level, as opposed to the system, database or network level, and watches what transpires inside and across accounts using any channel available to a user.”

Business Analytics in the fight against fraud
A recent report by the Fraud Management Institute identifies the business analytics approaches that should be – and are being – used to execute an enterprise fraud strategy: data integration, fraud detection models, alert management and results evaluation. An effective enterprise fraud data warehouse needs to capture and integrate data from a wide variety of sources and aggregate the data related to all transactions so they are useful for real-time risk scoring.

For example, the State of Washington has investigators doing research on 12 to 25 different technology systems to get a more comprehensive view of a particular employer. Firms need to consider multiple analytical approaches to modeling fraud that extend beyond business rules and include anomaly detection, predictive models and social network analysis. The choice of which methods to use often depends on the particulars of the application and the institution. In general, there is a trend away from the use of business rules as the lone method for defining alerts.
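As a toy illustration of the simplest of these methods, anomaly detection, a new transaction can be scored against an account’s historical behaviour. This is a minimal sketch only; production enterprise fraud management systems combine many behavioural features and model types, not a single z-score on amounts:

```python
from statistics import mean, stdev

def is_suspicious(history, amount, threshold=3.0):
    """Flag a transaction whose amount sits more than `threshold`
    standard deviations away from the account's historical mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(amount - mu) / sigma > threshold

# Hypothetical account history of transaction amounts
history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 44.0]

is_suspicious(history, 50.0)    # in line with past behaviour: not flagged
is_suspicious(history, 5000.0)  # hundreds of deviations out: flagged
```

Note that the baseline statistics are computed from past behaviour only; folding an extreme transaction into its own baseline would mask it, which is one reason real systems maintain rolling behaviour profiles per account.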

Organizations are moving toward more responsive fraud alerts and a higher level of alert integration across business lines and various geographical regions. In some cases, the alerts are integrated into a single case management tool (see the SAS white paper Enterprise Case Management). Finally, organizations should be evaluating results with key metrics that are agreed upon, documented and reviewed regularly. Key metrics can include historical expected losses per account, number of false positives and total exposure. The State of Washington has already estimated an 8 to 1 return on their investment in a workers compensation fraud prevention solution from SAS.

Time to Take a Fresh New Approach
As banking evolves into new business channels, these channels can pose new risks. The first concern is to know and authenticate customers so you know with whom you’re doing business. This is easier said than done, which is why a layered defense should be used. Analytics are critical to that defense. Use technology that can learn from complex data patterns, and use sophisticated decision models to better manage false positives. To learn more, read “Building a Better Banking World.”

Employing a layered, hybrid approach to fraud detection also makes it possible to tailor detection to specific clients. Included in this hybrid approach are pre-packaged analytics, an important part of ensuring rapid ROI and simple deployment.

Many financial institution fraud experts know they need to do more, but because many companies have already poured money into simple solutions, it is difficult to explain to executives where the institution stands and what it needs to do next. Many companies can’t even measure how successful their efforts are with the tools they already deploy. Use this simple five-level assessment framework to get the conversation started.

Using multiple analytical approaches across all organizational transactions will not only give you better monitoring of fraudulent activities, but also more accurate behavior profiles that result in incremental detection and reduced false-positive rates. This will keep your customers safe from financial harm and protect your financial institution’s reputation.

Closing Thoughts
I hope I have left you feeling more confident in the important role that business analytics plays in staying ahead of fraudsters and keeping them at bay. Remember our lady who was running the cancer scam? With sophisticated analytics at work, her day-to-day activity, including doctors’ visits, medicine refills and donation transactions, could have been tracked and flagged as suspicious before many of the donation checks were cashed!

When these tracking measures are applied continuously, detective work becomes preventative.

Watch this short YouTube video to see how one of our key global customers is working to build a better banking world with less fraud, bigger profits and more satisfied customers.