Memorial Day 2011 by Oliver North

When I was a kid, we called May 30 “Decoration Day.” It was an occasion for Boy Scouts to be up before dawn and report, in uniform, to the American Legion hall. There, Cub Scouts would be paired with older Boy Scouts, organized into detachments of a dozen or so and issued bags of small American flags. The groups then “deployed” in station wagons and pickup trucks to local cemeteries and churchyards, where we placed Old Glory on every veteran’s grave. Later in the morning, there was a parade down Main Street, led by a color guard, the high-school band and ranks of veterans from World War I, World War II and the war of the moment, Korea. The Veterans of Foreign Wars sold red poppies to raise funds for the disabled. Politicians made speeches, and citizens prayed in public. It was a solemn annual event that taught us reverence for those who served and sacrificed for our country. It’s no longer so.

Begun as a local observance in the aftermath of the Civil War, the first national commemoration took place May 30, 1868, at the direction of Gen. John A. Logan, commander of the Grand Army of the Republic. Though his General Order No. 11 specified “strewing with flowers or otherwise decorating the graves of comrades who died in defense of their country during the late rebellion” — meaning only Union soldiers — those who tended the burial sites at Arlington, Va., Gettysburg, Pa., and Vicksburg, Miss., decided on their own to decorate the biers of both Union and Confederate war dead.

For five decades, the holiday remained essentially unchanged. But in 1919, as the bodies of young Americans were being returned to the U.S. from the battlefields of World War I, May 30 became a truly national event. It persisted as such until 1971, during Vietnam — the war America wanted to forget — when the Uniform Monday Holiday Act passed by Congress went into effect and turned Memorial Day into a “three-day weekend.” Since then, it’s become an occasion for appliance, mattress and auto sales, picnics, barbecues and auto races. Thankfully, there are some places besides Arlington National Cemetery where Memorial Day still is observed as a time to honor America’s war dead. Here in Triangle, Va., the Marines do it right.

Like all Marine Corps installations, every major structure at Quantico is named for a fallen fellow warrior. On May 13, hundreds of Marines and their families gathered to dedicate a new staff noncommissioned officer academy, named in honor of Sgt. Kenneth Conde Jr. Our Fox News’ “War Stories” team was embedded with his unit, 2nd Battalion, 4th Marines, in Ramadi, Iraq, during April 2004. Shortly after Sgt. Conde was wounded in action during a gunfight with enemy insurgents, I asked him why he refused to be medically evacuated. His response: “There is no other choice for a sergeant in the Marine Corps. You have to lead your Marines.”

Cpl. Jared McKenzie, one of Conde’s Marines, said of his sergeant: “He always led from the front and never asked us to do something he wouldn’t do.” Sgt. Conde was awarded a Bronze Star and a Purple Heart for his valor and wounds in that engagement. On July 1, just eight days after his 23rd birthday, he was killed by an improvised explosive device.

At the dedication ceremony, Conde’s battalion commander, Col. Paul Kennedy, described the young sergeant as “a courageous, inspiring leader.” The fallen Marine’s father, Kenneth Conde Sr., said: “I’m wearing my son’s combat boots. Though they fit, I could never fill them.”

Just down the road from Conde Hall is another testament to how the Marines honor America’s heroes. Quantico National Cemetery occupies 725 beautifully landscaped acres donated by the Marines to the Veterans Administration in 1977. This final resting place for more than 28,000 Americans who served in every branch of our armed forces is closely linked to some of the most crucial events in U.S. military history. The fledgling Continental Navy prepared to battle the British fleet here in 1775-76. During the Civil War, it was a blockade point and subsequently a logistics base during the bloody battle for Fredericksburg. In 1918, the Marines established a training base and an air station for units deploying to fight in World War I. Since 1941, Quantico has been the home of the Marines’ Officer Candidates School and The Basic School for all Marine officers. Today it is also home to the FBI and DEA academies.

On Memorial Day, an “Avenue of Honor” through Quantico National Cemetery is adorned with American flags. A “Memorial Pathway” displays monuments to Edson’s Raiders of WWII fame and recipients of the Purple Heart; memorials to the 1st, 4th and 6th Marine divisions; and a monument erected to America’s veterans by the commonwealth of Virginia.

This is also the final resting place for a close friend — and a reminder of present-day peril. On Feb. 17, 1988, U.S. Marine Col. William “Rich” Higgins was kidnapped in Beirut by Iranian-supported Hezbollah terrorists. They murdered him in July 1990. His remains were interred here in 1991. Rich Higgins’ gravesite is my Memorial Day reminder that the streets of heaven really are guarded by U.S. Marines. So are the streets of America.

Fighting in Yemen Tilts Country Toward Civil War by Meredith Buel

Middle East analysts say recent fierce fighting in Yemen may tilt the country toward civil war. The instability increases the risk that a branch of al-Qaida based in Yemen could launch terrorist attacks on foreign soil.

For several months, mostly peaceful protesters, inspired by uprisings in Egypt and Tunisia, have been demanding political and economic change in Yemen.

But in recent days fierce fighting has erupted in the Yemeni capital, Sana’a, with heavy shelling and street battles between the Hashid clan’s tribal militias and pro-government forces.

Edmund J. Hull, a former U.S. Ambassador to Yemen, said, “I think absent some kind of concerted international effort to change the momentum, this is going towards increased violence and quite likely a civil war.”

The fighting began after President Ali Abdullah Saleh refused for a third time to sign a deal to transfer power and eventually step down after more than three decades of authoritarian rule.

Tribal chief Sadiq al-Ahmar has emerged as a prominent challenger of Saleh’s rule. Al-Ahmar leads a powerful tribal organization in an impoverished country where family loyalties are important.

Katherine Zimmerman, an expert on Yemen with the American Enterprise Institute, said, “Saleh’s refusal for the third time to sign a transition deal may have been the last straw for a lot of these groups who say to themselves that perhaps he is not willing to step down peacefully and that the only way to transfer power would be through force.”

American intelligence officials believe al-Qaida in the Arabian Peninsula, based in Yemen, is now the most significant terrorist threat to the United States.

Al-Qaida’s ability to operate in Yemen is expanding, while government efforts against the terrorist group are receding, as troops have been pulled into Sana’a to protect the presidential palace and other infrastructure.

Former Ambassador Hull, author of High Value Target: Countering al-Qaida in Yemen, said that makes the group even more dangerous.

“They have always had an intent to strike not only in Yemen but regionally, in Saudi Arabia, and against the U.S. homeland,” he said. “There has been no doubt about that intent. What we are seeing now is an increased capability to act on that intent.”

Analysts say the unrest in Yemen is increasing the operating space for al-Qaida to plan and plot attacks.

Zimmerman said America needs to be on guard. “For the U.S., this puts the country at great risk and it also allows the al-Qaida operatives to operate very freely without as much surveillance.”

While Yemen’s future is unpredictable, analysts say the current turmoil is a boon for al-Qaida, which, they say, is an increasing threat to countries from the Middle East to the United States.

Gaining muscle from mussels: High-strength CNT composite fibres by Marie-Claire Hermant

The exceptional mechanical properties of carbon nanotubes (CNTs) have heralded them as the next-generation super-strength fibre material. Researchers’ imaginations have run wild with the idea that these nano-sized carbon structures might allow us to build the world’s strongest cables, fibres and fabrics. From space elevators to bulletproof vests, CNTs have an immense field of potential applications. Moreover, such materials could offer a secondary functionality thanks to the ballistic electron transport possible through CNTs.

Whilst great strides have been made in the preparation of high-strength CNT-based fibres, many fibres still do not exceed the strengths achieved for hydrocarbon-based materials such as Dyneema and Kevlar. The majority of CNT fibres are prepared via a spinning process, either from solution or from the solid state. Spun fibres then undergo twisting and post-spinning densification to improve their mechanical properties. It is at this point that researchers from the KAIST Institute in South Korea have altered the preparation strategy, adding an infiltration step that introduces polymeric cross-linking molecules: mussel-inspired, catechol-containing adhesive polymers suspended in methanol are absorbed into the twisted CNT cable and cross-linked in a subsequent step.

The adhesive polymers investigated mimic proteins found in the adhesive foot of the marine mussel Mytilus edulis. These molecules undergo cross-linking, even in a marine environment, through chemical reactions between the catechol and amine groups in the proteins, as well as metal-coordination. Similar cross-linking reactions were performed on infiltrated CNT-cables; first through heat treatments, and secondly through iron-catechol coordination.

The strong binding of individual CNTs and the mussel-inspired matrix is easily seen when the cables are fractured and examined under an electron microscope. Most importantly, the tensile strength (strength to failure) of the treated fibres is increased by 500% over the CNT fibre alone. Even though the strength of a single CNT still dwarfs the values reported here, the improvements observed when using the mussel-inspired adhesives are commendable, especially when the scalability of the fibre preparation is considered. With further modification of the cross-linker molecules, even higher strengths are likely possible.

Ryu et al., Adv. Mater. 2011, 23, 1971–1975; DOI: 10.1002/adma.201004228

3-D printers may someday allow labs to create replacement human organs by Bonnie Berkowitz

The machine looks like the offspring of an Erector Set and an inkjet printer.

The “ink” feels like applesauce and looks like icing. As nozzles expel the pearly material, layer by layer, you imagine the elaborate designs this device could make on gingerbread cookies.

But the goo is made of living cells, and the machine is “printing” a new body part.

These machines — they’re called three-dimensional printers — work very much like ordinary desktop printers. But instead of just putting down ink on paper, they stack up layers of living material to make 3-D shapes. The technology has been around for almost two decades, providing a shortcut for dentists, jewelers, machinists and even chocolatiers who want to make custom pieces without having to create molds.

In the early 2000s, scientists and doctors saw the potential to use this technology to construct living tissue, maybe even human organs. They called it 3-D bioprinting, and it is a red-hot branch of the burgeoning field of tissue engineering.

In laboratories all over the world, experts in chemistry, biology, medicine and engineering are working on many paths toward an audacious goal: to print a functioning human liver, kidney or heart using a patient’s own cells.

That’s right — new organs, to go. If they succeed, donor waiting lists could become a thing of the past.

Tony Atala, director of the Wake Forest Institute for Regenerative Medicine in North Carolina, envisions what he calls “the Dell computer model,” where a surgeon could order up “this hard drive, with this much memory …,” only he or she would be talking about specs for living tissue rather than electronics.

Bioprinting technology is years and possibly decades from producing such complex organs, but scientists have already printed skin and vertebral disks (the soft tissue that grows in the spine between the vertebrae) and put them into living bodies. So far, none of those bodies have been human, but a few types of printed replacement parts could be ready for human trials in two to five years.

“The possibilities for this kind of technology are limitless,” said Lawrence Bonassar, whose lab at Cornell University has printed vertebral tissue that tested well in mice. “Everyone has a mother or brother or uncle, aunt, grandmother who needs a meniscus or a kidney or whatever, and they want it tomorrow. … The promise is exciting.” But he warns that nothing is likely to be ready in time to help people who already need an organ. “The goal is not to squash that excitement, but to temper it with the reality of what the process is.”

The reality for now is that making such things as vertebral disks and knee cartilage, which largely just cushion bones, is far easier than constructing a complicated organ that filters waste, pumps blood or otherwise keeps a body alive.

Scientists say the biggest technical challenge is not making the organ itself, but replicating its intricate internal network of blood vessels, which nourishes it and provides it with oxygen.

Many tissue engineers believe the best bet for now may be printing only an organ’s largest connector vessels and giving those vessels’ cells time, space and the ideal environment in which to build the rest themselves; after that, the organ could be implanted.

The cells, after all, have been functioning within the body already in some capacity, either as part of the tissue that is being replaced or as stem cells in fat or bone marrow. (Donor stem cells could be used, but ideally cells would come directly from the patient.)

“The cells are actually the tissue engineers, so the people that do the work are just cheerleaders,” said Rocky Tuan, director of the Center for Cellular and Molecular Engineering at the University of Pittsburgh. “When we do tissue engineering, we are accelerating what the cells normally do. I tell people it’s assisted living, because we help the cells. We build all the houses and everything, and then we say, ‘Cells, come in and do your thing.’ ” If the cells do their thing correctly, the organ lives and grows just as the original once did.

Another huge challenge is common to much new research: lack of money.

“If the federal government created a ‘human organ project’ and wanted to make the kidney, I literally think it could happen in 10 years,” said chemical engineer Keith Murphy, co-founder of Organovo, a firm that makes and works with high-end bioprinters. But that would require a massive commitment of people, resources and billions of dollars, he said.

Once scientists get over the financial and technical hurdles of bioprinting, they will have to square the process with the Food and Drug Administration, which will have to decide how to regulate something that is not simply a device, a biological product or a drug, but potentially all three.

Before printed organs are implanted into people, bioprinting may be used in other ways. Murphy’s group is working on a project that will replicate tissue for testing the effects of medications, particularly cancer drugs. This could eliminate some of the drawn-out, trial-and-error process of trying a series of drugs on a person before finding one that works.

While a complex organ would be the holy grail for most tissue engineers, some like to look even farther ahead, straight into science fiction.

“If one can bioprint functional human organ constructs, then bioprinting a whole human — or whatever will be the name for such a creature — is just a logical extension,” said Vladimir Mironov, a pioneer in the field who is working with computer companies to design better bioprinting software.

Others don’t know why anyone would want to do that.

“It’s a visionary idea,” said Mironov’s colleague Jonathan Butcher of Cornell, whose lab is working on printing heart valves. “But the usual method of human reproduction works pretty well.”

Discovery opens the door to electricity from microbes (A UEA product story)

Generating energy from bacteria is a significant step closer following a breakthrough discovery by scientists at the University of East Anglia.

Published today by the leading scientific journal Proceedings of the National Academy of Sciences (PNAS), the research demonstrates for the first time the exact molecular structure of the proteins which enable bacterial cells to transfer electrical charge.

The discovery means scientists can now start developing ways to ‘tether’ bacteria directly to electrodes – creating efficient microbial fuel cells or ‘bio-batteries’. The advance could also hasten the development of microbe-based agents that can clean up oil or uranium pollution, and fuel cells powered by human or animal waste.

“This is an exciting advance in our understanding of how some bacterial species move electrons from the inside to the outside of a cell,” said Dr Tom Clarke of UEA’s School of Biological Sciences.

“Identifying the precise molecular structure of the key proteins involved in this process is a crucial step towards tapping into microbes as a viable future source of electricity.”

Funded by the Biotechnology and Biological Sciences Research Council (BBSRC) and the US Department of Energy, the project is led by Dr Clarke, Prof David Richardson and Prof Julea Butt of UEA, in collaboration with colleagues at the Pacific Northwest National Laboratory in the US.

In earlier research published by PNAS in 2009, the team demonstrated the mechanism by which bacteria survive in oxygen-free environments by constructing electrical wires that extend through the cell wall and make contact with a mineral – a process called iron respiration or ‘breathing rocks’. (See http://www.uea.ac.uk/bio/news/rocknews)

In this latest research, the scientists used a technique called X-ray crystallography to reveal the molecular structure of the proteins attached to the surface of a Shewanella oneidensis cell through which electrons are transferred.

‘Structure of a bacterial cell surface deca-heme electron conduit’ by T Clarke (UEA), M Edwards (UEA), A Gates (UEA), A Hall (UEA), G White (UEA), J Bradley (UEA), C Reardon (PNNL), L Shi (PNNL), A Beliaev (PNNL), M Marshall (PNNL), Z Wang (PNNL), N Watmough (UEA), J Fredrickson (PNNL), J Zachara (PNNL), J Butt (UEA) and D Richardson (UEA) is published in the online Early Edition of the Proceedings of the National Academy of Sciences on May 23, 2011.

‘Octopus’ laser improves understanding of processes behind cancer (A BBSRC product story)

A breakthrough in understanding a biological process that causes many common cancers, including lung and breast cancer, opens up the possibility of new treatments. The results are featured on the front cover of the journal Molecular and Cellular Biology published on 12 May 2011.

Experts from STFC’s Central Laser Facility (CLF) and Computational Science and Engineering Department (CSED), funded by the Biotechnology and Biological Sciences Research Council (BBSRC), have solved a puzzle that has confounded scientists for more than 30 years.

The researchers have discovered a previously unknown molecular shape which is partly responsible for transmitting the signals that instruct cells within the body when to grow and divide. It is the uncontrolled growth of cells that causes cancer to spread through the body. Until now, not enough was known about how these molecules, known as epidermal growth factor receptors (EGFRs), transmit messages in the development of cancer. This means drugs designed to stop them transmitting these cancer-inducing signals have also been limited in their effectiveness.

Project leader Dr Marisa Martin-Fernandez, a CLF scientist based at the Research Complex at Harwell (RCaH), says: “A number of drugs aim to limit EGFRs’ role in spreading cancer but because human EGFRs haven’t been well understood, the drugs are designed simply to block every signal they transmit. But the human body is good at compensating for losses of function so it finds ways of bypassing blocked receptors to allow cancerous cells to grow again. Unfortunately the current drugs therefore all too often only provide temporary remission.

“Our breakthrough will provide a better platform of knowledge on structure variation of EGFRs in vivo. Potentially this enables the pharmaceutical industry to develop drugs that target EGFRs’ cancer-related functions more specifically but also allow the receptors to go on performing other tasks. This makes it less likely that the body will try to compensate for total loss of function.”

Peter Parker is the Principal Investigator at King’s College London on this work. Dr George Santis, also from King’s College London, is a consultant in respiratory medicine and will help take this work forward. He said: “Translating knowledge derived from scientific research into successful clinical therapies is exemplified by EGFR and its dysregulation in cancer. The use of new biologicals that inhibit EGFR has proved transformational in managing solid tumours particularly lung cancer where conventional anti-cancer treatment reached a plateau. There is however still much we don’t understand regarding EGFR and its role in malignancy; this breakthrough provides the foundation for novel ways to assess EGFR in cells and tissues that may lead to new insights on how to target EGFR to treat human cancers”.

The team has also shown that this shape shares key features with the better understood EGFR molecules in fruit flies, providing clues on how EGFRs have changed during evolution.

Dr Martyn Winn of the CSED at STFC’s Daresbury Laboratory says: “The key has been close collaboration between the experimental and computational teams involved. The CLF used its OCTOPUS facility to take nanoscale measurements of EGFRs in cells. We took the measurements and used high performance computing (HPC) to calculate the receptors’ high-resolution structure, allowing us to determine their similarities with the fruit fly EGFRs.”

Professor John Collier, Director of the CLF, says: “Breakthroughs like this have the potential to really pay dividends in terms of saving lives and maximising the value of healthcare expenditure. By constantly pushing forward the boundaries of what laser technology can do, we can deliver real-world benefits that tangibly improve people’s lives.”

Details of the breakthrough are presented in the paper ‘Human EGFR aligned on the plasma membrane adopts key features of Drosophila EGFR asymmetry’.

McGowan Institute for Regenerative Medicine and The Bioreactor Group: National Geographic Society coverage on “The Skin Gun”

Related links: an overview of the total therapy; more about this clinical therapy; information on regenerative medicine help for soldiers; and more about the University of Pittsburgh and UPMC | McGowan Institute for Regenerative Medicine and The Bioreactor Group.

Solar Nantenna Electromagnetic Collectors by D. K. Kotter and S. D. Novack (Idaho National Laboratory, 2025 Fremont Avenue, Idaho Falls, ID 83415), W. D. Slafer (MicroContinuum, Inc., 57 Smith Place, Cambridge, MA 02138) and P. J. Pinhero (Department of Chemical Engineering, University of Missouri, Columbia, MO 65211)

The research described in these papers explores a new and efficient approach for producing electricity from the abundant energy of the sun, capturing up to 95 percent of light energy with nanoantenna (nantenna) electromagnetic collectors (NECs). NEC devices target mid-infrared wavelengths, where conventional photovoltaic (PV) solar cells are inefficient and where there is an abundance of solar energy. The initial concept of designing NECs was based on scaling radio-frequency antenna theory to the infrared and visible regions. This approach initially proved unsuccessful because the optical behavior of materials in the terahertz (THz) region was overlooked and, in addition, economical nanofabrication methods were not previously available to produce the optical antenna elements. This paper demonstrates progress in addressing significant technological barriers, including: (1) development of frequency-dependent modeling of double-feed-point square spiral nantenna elements, (2) selection of materials with proper THz properties, and (3) development of novel manufacturing methods that could potentially enable economical large-scale manufacturing. We have shown that nantennas can collect infrared energy and induce THz currents, and we have also developed cost-effective proof-of-concept fabrication techniques for the large-scale manufacture of simple square-loop nantenna arrays. Future work is planned to embed rectifiers into the double-feed-point antenna structures. This work represents an important first step toward the ultimate realization of low-cost devices that will collect as well as convert this radiation into electricity. This could lead to a broadband, high-conversion-efficiency, low-cost solution to complement conventional PV devices.
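To see why scaling antenna theory to the infrared yields micrometer-scale elements, a back-of-envelope half-wave resonance estimate helps (the 30 THz operating point is an illustrative mid-infrared value; as the abstract notes, a real design must also account for the optical behavior of metals at THz frequencies, which shifts the effective resonant length):

$$\lambda = \frac{c}{f} = \frac{3\times 10^{8}\ \mathrm{m/s}}{3\times 10^{13}\ \mathrm{Hz}} = 10\ \mu\mathrm{m}, \qquad L_{\lambda/2} \approx \frac{\lambda}{2} = 5\ \mu\mathrm{m}$$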

Kotter, D. K., Novack, S. D., Slafer, W. D., and Pinhero, P. J., “Theory and Manufacturing Processes of Solar Nanoantenna Electromagnetic Collectors,” J. Sol. Energy Eng., February 2010, Volume 132, Issue 1, 011014 (9 pages); doi:10.1115/1.4000577

“Solar Nantenna Electromagnetic Collectors” (PDF): http://www.inl.gov/technicalpublications/Documents/3992778.pdf

lightRadio™ Technology Overview by Steve Kemp and Tom Gruba, Senior Product Marketing Directors, Alcatel-Lucent

lightRadio, a new wireless networking paradigm, brings service providers innovations that improve capacity, coverage and performance just when they are needed most.

Highlights

  • lightRadio supports current and anticipated wireless technologies to address growth and quality challenges
  • lightRadio combines innovations in antennas, radios and baseband processing with support for virtualization, cloud principles and architectural flexibility
  • lightRadio allows easy reconfiguration and software reprogramming of network elements

Innovating to address service provider challenges
To meet the skyrocketing demand for bandwidth, wireless service providers face a number of challenges that make today’s networks economically unsustainable, including:

  • Adding more towers, antennas, radios and processing capacity
  • Supporting new technologies
  • Increasing spectral bandwidth
  • Making better use of cell site capacity

Based on Alcatel-Lucent Bell Labs innovations, lightRadio is designed to optimize total network costs over time and to make the most of each wireless service provider’s existing assets and capabilities. Figure 1 illustrates the lightRadio architecture, including:

  • The main components: Antennas, radios, baseband units, controllers and management
  • Two different wireless scale points: Conventional macro cells and smaller metro cells
  • Three different baseband processing configurations: At the base of the tower (conventional baseband units), in the radio head (all-in-one) and centralized, pooled baseband processing (in the cloud)
Figure 1: The lightRadio architecture combines innovations to address service provider challenges

Making antennas smaller, smarter, stronger
Antennas have a significant impact on consistency of coverage and capacity. Historically, service providers added passive transmit and receive antennas for each radio and technology when they needed to improve these aspects.

With lightRadio antennas, this is no longer necessary. lightRadio uses smart active antenna arrays (AAAs) that deliver multiple-input multiple-output (MIMO) gains and sophisticated beamforming in a very small footprint. With these capabilities, radio frequency (RF) energy can be dynamically directed exactly where it is needed based on changes in cell loading and traffic density. Figure 2 illustrates one of the innovative AAA designs used in lightRadio.

Figure 2: lightRadio features active antenna array with transceivers behind each antenna element
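To make the beamforming idea concrete, here is a minimal sketch of electrical downtilt with a uniform linear array, in Python. This is generic textbook phased-array math, not Alcatel-Lucent’s lightRadio design; the element count, spacing and carrier frequency are assumptions chosen for illustration.

```python
import numpy as np

# Minimal sketch of vertical beam steering with a uniform linear array (ULA).
# Generic textbook beamforming, NOT the lightRadio implementation: element
# count, spacing and carrier frequency are illustrative assumptions.
C = 3e8                 # speed of light, m/s
FREQ = 2.6e9            # carrier frequency, Hz (assumed)
LAM = C / FREQ          # wavelength, m
N = 8                   # number of vertical elements (assumed)
D = LAM / 2             # half-wavelength element spacing
K = 2 * np.pi / LAM     # wavenumber

def steering_weights(tilt_deg):
    """Per-element phase weights that point the main beam at tilt_deg."""
    phases = -K * D * np.arange(N) * np.sin(np.radians(tilt_deg))
    return np.exp(1j * phases) / np.sqrt(N)

def array_gain_db(weights, angle_deg):
    """Array response (dB) toward angle_deg for the given weights."""
    v = np.exp(1j * K * D * np.arange(N) * np.sin(np.radians(angle_deg)))
    return 20 * np.log10(np.abs(weights @ v))

w = steering_weights(6.0)       # electrically tilt the beam 6 degrees
print(array_gain_db(w, 6.0))    # ~9 dB: full array gain at the steered angle
print(array_gain_db(w, 0.0))    # reduced gain off-beam, toward the horizon
```

Changing the weights in software re-points the beam; no mechanical tilt of the antenna is needed, which is what lets RF energy follow cell loading and traffic density.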

The lightRadio AAAs can:

  • Improve capacity up to 30 percent with vertical beamforming
  • Lower power consumption by improving coverage
  • Improve antenna robustness by allowing the array to be reconfigured to reduce the impact of individual element failures

The lightRadio architecture also supports conventional passive antennas. Together with centralized baseband processing, these antenna solutions support advanced inter-cell interference coordination (ICIC) schemes between neighboring cells. This significantly improves signal-to-noise ratios.

Making radios multi-band, multi-purpose
Deploying capacity in new spectral bands usually means buying expensive new spectrum and equipment. If the new band is lower in frequency than existing bands, coverage will improve because lower frequency bands have better propagation and cell reach. If the new band is higher in frequency, its reach will be more limited. This can create coverage holes between base stations, particularly when cell sites were chosen based on a lower frequency spectral band. Coverage holes can decrease quality of experience (QoE) and are expensive to fill.
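A back-of-envelope path-loss comparison shows the size of the effect. The sketch below applies only the textbook free-space formula (real cell planning adds terrain and clutter models), with an assumed 3 km cell radius:

```python
import math

# Back-of-envelope free-space path loss (FSPL), showing why lower-frequency
# bands reach farther. Textbook formula only; the 3 km cell radius is an
# assumption for illustration.
def fspl_db(distance_km, freq_mhz):
    """Standard free-space path loss in dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

D_KM = 3.0
print(f"700 MHz at {D_KM} km: {fspl_db(D_KM, 700):.1f} dB")    # ~98.9 dB
print(f"2.6 GHz at {D_KM} km: {fspl_db(D_KM, 2600):.1f} dB")   # ~110.3 dB
# The ~11.4 dB gap means a 2.6 GHz cell must shrink to roughly 1/3.7 of the
# 700 MHz radius to see the same free-space loss, producing the coverage
# holes described above.
```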

lightRadio uses wideband radios that can operate across multiple spectral bands. Service providers no longer have to deploy new radios to support new bands. These wideband radios can also be incorporated into a smaller number of lightRadio remote radio heads (RRHs) to support ongoing capacity increases. As a result, “light radios” dramatically reduce capital expenditures in multi-band deployments. They also help service providers deal with tower loading issues.

Today, the size, weight, wind loading, visual appearance and leasing costs of cell towers have become blocking issues for the evolution of radio networks. At most cell sites, radios are on the tower in an RRH configuration. Macro cell sites are typically divided into three sectors with a separate RRH required for each frequency band. Because some service providers have sites with five different frequency bands in three sectors, 15 RRHs on a cell tower is not uncommon.

To make matters worse, multiple service providers often share a tower that is leased from a third-party provider. New antenna configurations, such as 4 x 4 MIMO (four transmit antennas and four receive antennas) further increase complexity. As illustrated on the right side of Figure 3, lightRadio significantly reduces this problem.

Figure 3: lightRadio significantly reduces tower loading

Making baseband processing more efficient, effective, economical
As wireless service providers look to increase capacity and improve network economics, they may deploy Long Term Evolution (LTE) in a new frequency band, such as 700 MHz or 2.6 GHz, or in a new antenna configuration, such as 4 x 4 MIMO. They may also take advantage of new technologies such as LTE-Advanced.

LTE-Advanced provides carrier aggregation — “bonding” together separate frequency bands — and sophisticated methods for coordinating multiple base stations. These methods include coordinated multipoint transmission (CoMP) and dynamic ICIC. CoMP increases spectral efficiency and improves end-user performance. When engineered to take advantage of centralized baseband processing, CoMP and ICIC technologies can significantly increase network efficiency.

lightRadio supports both current and anticipated wireless technologies. In a lightRadio architecture, the baseband module (whether centralized or remote) can be dynamically reprogrammed to support multiple combinations of Wideband Code Division Multiple Access (W-CDMA) and LTE technologies and their evolution. That means a wireless service provider could start with a baseband unit that is fully W-CDMA and gradually reconfigure its software as needs change until the same hardware is fully used for LTE. Remote software configuration reduces time and costs as service providers evolve to support new technologies.

lightRadio also allows wireless service providers to seamlessly migrate baseband processing from a remote site to a centralized baseband processing pool. Digital modules from the remote baseband unit can be redeployed in the centralized processing site. Alternatively, they can be configured to operate as a coordinated pool of resources at the remote site.

lightRadio supports two baseband processing options with different backhaul requirements:

  • Processing baseband signals on the remote cell site in a baseband unit (BBU) at the base of the cell tower or integrated with the radio head (all-in-one BTS). This option requires backhauling of asymmetric, latency-insensitive, relatively low bit-rate streams of native IP traffic. We call this method “IP backhaul.” IP backhaul transport is supported over copper, microwave and fiber infrastructures.
  • Processing baseband signals in a central location as part of a pool of resources “in the cloud.” This option requires backhauling of radio signal samples which are symmetric, latency sensitive and typically high bit rate. We call this method “CPRI interconnect,” referring to the Common Public Radio Interface (CPRI) specification typically used to transport these signals.
CPRI interconnect requires point-to-point fiber links or a wavelength division multiplexing (WDM) passive optical network (PON) with 10 Gb/s per wavelength. However, it also benefits from Bell Labs’ compressed I/Q transport. Used with LTE, the compressed I/Q algorithms reduce bandwidth requirements by a factor of 3 compared to uncompressed I/Q transmission; a rough sizing sketch follows below.
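To see why the CPRI option demands so much more transport capacity than IP backhaul, here is a rough sizing sketch for LTE 20 MHz antenna-carriers. The sample width and 8B/10B line-coding overhead are common CPRI assumptions, not Alcatel-Lucent figures:

```python
# Rough CPRI bit-rate estimate for LTE 20 MHz antenna-carriers, sizing the
# "in the cloud" option. Sample width and 8B/10B coding overhead are common
# CPRI assumptions, not vendor-specific figures.
SAMPLE_RATE = 30.72e6     # LTE 20 MHz I/Q sample rate, samples/s
BITS_PER_COMPONENT = 15   # bits per I and per Q sample (assumed)
LINE_CODING = 10 / 8      # 8B/10B line-coding overhead

def cpri_rate_gbps(antenna_carriers, compression=1.0):
    """Transport bit rate in Gb/s after optional I/Q compression."""
    rate = SAMPLE_RATE * 2 * BITS_PER_COMPONENT * LINE_CODING * antenna_carriers
    return rate / compression / 1e9

print(f"2x2 MIMO, uncompressed:   {cpri_rate_gbps(2):.2f} Gb/s")      # ~2.30 Gb/s
print(f"2x2 MIMO, 3:1 compressed: {cpri_rate_gbps(2, 3.0):.2f} Gb/s") # ~0.77 Gb/s
```

Even compressed, these symmetric, latency-sensitive streams dwarf the native IP traffic of the first option, which is why CPRI interconnect calls for fiber-class links.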

This architectural flexibility allows the widest deployment and reuse of existing infrastructure, using a combination of IP backhauling and CPRI interconnect. It also helps to reduce total cost of ownership (TCO) and accelerates deployments.

Increasing capacity
Increasing capacity usually means increasing the number of carriers (the W-CDMA method) or improving the spectral bandwidth for each carrier (the LTE method). If spectral increases are in the same band, the same equipment and radio technology can often be reused. If spectral increases are in a new frequency band, new antennas, radio and baseband equipment are required.

Because capacity needs to match multiple factors — demand, device populations and related usage intensity — service providers typically need a combination of existing and emerging technologies. lightRadio lets service providers effectively deploy a solution that matches user demand and is optimized from the antenna to the baseband processing and controller elements.

In a lightRadio architecture, baseband digital processing modules are built with new System on a Chip (SoC) technology. The SoC technology:

  • Incorporates previously discrete, technology-specific components into a single device that offers high performance at low costs and is technology-agnostic.
  • Can be remotely reprogrammed to support new features and even new radio technologies. This means that when W-CDMA customers shift to new LTE-based devices, the baseband module that has been serving them can be remotely reprogrammed as an LTE baseband module.

When capacity expansions are in new spectral bands, it may mean service providers have to acquire spectrum that covers a much broader area than the locations where demand has peaked. For example, a dense urban site might need four carriers or more for W-CDMA. But this is probably much more than a rural location needs.

In contrast, increasing a cell site’s density lets wireless service providers use existing spectrum assets to increase the effective serving capacity. This is the key reason for deploying smaller macro cells and metro cells, sometimes called “pico cells.”

lightRadio supports two approaches for macro cells:

  • Putting all baseband processing into a single multi-technology base station
  • Transporting baseband radio signals to a centralized location that houses the baseband processing equipment

These approaches are complementary. The choice depends on availability of backhaul bandwidth, elasticity of demand patterns and operational costs.

Outside of urban areas, backhauling CPRI signals to a central location may not be cost effective. In these cases, a more traditional BBU is a better fit and cell sites are often served by microwave. Considering the size of wireless networks and the number of infrastructure configurations, no service provider wants to change the available features when a user moves between cells or locations. Nor do they want to invest in network assets that have a limited effective lifetime.

To help service providers address this challenge, the lightRadio product family uses common digital baseband components across different products and radio technologies. This gives service providers consistent functionality and reusable common hardware and software components.

Deploying smaller cells, or “metro cells,” at cost-effective locations offers another way to increase cell site density. Metro cells do not typically provide 360-degree contiguous coverage. Instead, they augment a macro network in “hot spots” that have high traffic density. In the lightRadio paradigm, metro cells are built on the same SoC technology as the macro cells and use the same backhaul resources.

Anticipating the need for coherent metro cell and macro cell optimization, Alcatel-Lucent extends the concept of self-optimizing networks (SONs). A new layer of understanding, called “wireless IP intelligence,” helps service providers respond to rapidly changing demand patterns. It optimizes the entire network, including the RAN, packet core and both licensed and unlicensed spectrum assets such as Wi-Fi® access points.

Stay tuned: lightRadio technical details to follow
This technology overview offers just a glimpse into how lightRadio addresses service provider challenges. Watch for additional articles that take a closer look at the benefits of key technical advances, such as the wideband active antenna array architecture. We’ll explore the factors that make these antennas the ideal evolution of low footprint RF hardware for base stations and a major step forward in beam-shaping flexibility.

To contact the authors or request additional information, please send an e-mail to techzine.editor@alcatel-lucent.com.

God shall provide: Towers go to church by Marc Speir

A little extra change never hurts — especially during a recession — and the members of Our Saviour’s Lutheran Church agree.

The aging population and the lackluster economy have affected the inner-city church in north-central Phoenix, as they have so many other nonprofits, and when the spiritual center was made an offer from on high, the congregation welcomed it.

Consultants from T-Mobile USA Inc. approached the church in 2008 and laid out a contract to lease part of the property and erect a 55-foot monopole cell tower, bolstering coverage in a “dark spot” in the carrier’s service.

While weekly collections, pledges and a lease to a child-care center on the property also help out Our Saviour’s, Garth Andrews, a long-time member of the church and negotiator of the arrangement, said additional revenue is always a good thing.

“We are by no means ready to close the doors,” said Andrews. “But we do watch our pennies pretty closely.”

Andrews spent several meetings over a three-month period hammering out a deal that beats the likes of fundraising auctions and dinner theatre.

T-Mobile ended up backing off the construction for two years before deciding to revisit the arrangement in the spring of 2010. Construction took about a month after the decision to return and the tower went live in late February 2011, with T-Mobile expanding cabinet space for its HSPA upgrade.

“They came back and said, ‘we now have funding, let’s start it over again and get back to the permitting process,’” said Andrews. “This time, they were able to build the tower.”

To preserve the site’s aesthetics, the tower was disguised as a tall palm tree; it sits on the church lawn facing a major thoroughfare at 1212 E. Glendale Ave.

Our Saviour’s isn’t the only church in town to pull profits from telecoms; others such as La Casa de Cristo Lutheran in north Scottsdale have gained and housed six towers on church grounds over the last dozen years. La Casa de Cristo has significantly more property, and it blends in the exposed towers by placing them around the church’s baseball field with floodlights attached.

More often than not, churches like to disguise tower eyesores as steeples, bell towers, cacti and various kinds of trees, complete with woodpecker holes and surrounding foliage.

Money to religious organizations can be channeled to pay utilities and staff salaries, repair buildings, fund missions and continue services to the destitute, while operators enjoy space on church lands, which are often closer to residences than commercial properties are. The deals are usually seen as win-win, and churches are getting involved locally in Arizona and nationwide.

“There was a hole here in T-Mobile’s coverage,” said Andrews. “There is not a hole on this property on Verizon’s coverage, for example, or in AT&T’s. They have a hole down the street and they’ve got a tower going up near that church.”

Due to contractual obligations, churches don’t often get to share how much they make leasing tower space to wireless providers, but industry contacts of RCR Wireless News suggest that profits range from about $750 to $1,300 a month per tower, depending on the tower.

The data tsunami on wireless devices puts critical pressure on carriers to grow quickly, and they keep their eyes peeled for ways to plug “dead zones” in coverage. Industry analysts estimate that large churches constitute approximately 10% of the property owners that negotiate cell tower leases.

Zoning codes in most of the country stipulate that towers can’t be built in residential parts of town. Placing towers in areas where people live and don’t use landline phones can be tricky, so leasing from churches, which are usually zoned residential but generally have more property and better access to neighborhoods, can serve an important function. Commercial and industrial areas are also less likely to have open land to build on, and there must be a 150-foot buffer between the tower and the nearest residential lot line.

“A conditional use permit was required to build anything other than church property,” Andrews said. “T-Mobile … just to make sure, sent circulators out with a petition and more than 50% of the neighborhood said, ‘it’s fine with us.”’

While carriers store information about hidden towers, the Federal Communications Commission doesn’t track disguised towers, nor do cities or the Arizona Corporation Commission, a regulatory agency charged with collecting data on landlines but not wireless builds.

It’s a good bet that churches everywhere would be open to the idea of leasing land, but carriers typically approach only those sites they need to extend service. Churches that are approached often don’t know how much negotiating power they have; by that point, the carrier has likely exhausted the other options it was considering.

“We did look into the going rate,” said Andrews. “It varies on the size of the tower, how many antennas are placed in the tower, so ours was on the lower end of the scale.”

Once a carrier is willing to barter with a church, concessions are often made on matters such as the look of the tower and the amount of the lease, with carriers usually willing to accommodate any worries.

“Anytime I had a concern, (T-Mobile) was on it,” said Andrews. “We had an irrigation problem with them at one point but they sent someone out the same day, so we’ve been happy.”

Alcatel-Lucent on way to startling turnaround by Peter Burrows and Matthew Campbell

When Adolfo Hernandez joined Alcatel-Lucent in late 2008, the networking-equipment maker was almost comically dysfunctional.

In one of his first meetings as the company’s president of Europe, Africa and the Middle East, a customer started screaming at him before he even got to his chair. At an all-hands gathering at an Eastern Europe facility, employees threw fruit and vegetables at executives announcing another round of restructuring.

Less than three years later – and after a decade of losses, downsizing, and one very large, messy merger – Alcatel-Lucent has become one of the most startling turnarounds in tech.

The Paris-based company is gaining share in markets such as routers and superfast 4G networks, where it has won billion-dollar contracts with Verizon Wireless and Sprint Nextel. Telecom executives are buzzing about a new Rubik’s Cube-size gizmo called lightRadio that can extend wireless coverage without need of massive, energy-hogging cell towers. The stock has jumped 117 percent this year, to $6.40.

If the company proves its performance is no fluke when it announces quarterly earnings Thursday, the stock could more than double over time as oft-burned investors come back, says Tim Savageaux, an analyst with Terrapin Research.

“The company has become super-relevant to its customers, but it’s still almost completely irrelevant to the stock market,” he says. “That’s usually a good buying opportunity.”

Integrating the firm

At the time of the merger in 2006, Alcatel was a sleepy French phone-equipment giant and Lucent, an AT&T spin-off that includes the famed Bell Labs, was a near-bankrupt former darling of Wall Street. Instead of synergies, the deal resulted in endless “passport wars” among overlapping sales and product groups.

“There used to be a French part and a U.S. part, but Ben really integrated the company,” says Eelco Blok, CEO of Dutch carrier Royal KPN, referring to Ben Verwaayen, who became Alcatel-Lucent’s CEO in 2008.

Verwaayen, 59, was well aware of Alcatel-Lucent’s problems. He was an executive at Lucent in its salad days and later ran BT, a major customer.

And yet he was still surprised at the depth of the chaos. On his first day on the job, he got an e-mail asking him to sign off on the hiring of a secretary for a Poland office, after 16 executives had already done so.

He forwarded the e-mail to all employees, saying, “This ends now.”

Verwaayen made some larger decisions that had been put off too long. The company stopped hedging and bet correctly that a 4G wireless technology called LTE would emerge as the global standard instead of rival WiMax. Alcatel-Lucent makes wireless equipment found in cell towers and optical-networking gear – and those businesses are poised for growth as telecoms roll out 4G networks.

He also demanded that executives come up with a strategy that spanned Alcatel’s varied product lines.

Control over network

The process was led by Basil Alwan, a Silicon Valley executive who had managed the company’s successful assault on the router business. Though routers are a basic building block of any network, Alwan’s unit had been left alone by Alcatel-Lucent’s warring tribes before Verwaayen joined.

Since mid-2009, the routers have been integrated with other gear, letting carriers simplify and more easily manage their networks.

Alcatel-Lucent’s share of routers sold to carriers grew from 12 percent in 2008 to 16 percent in 2010, and in the fourth quarter it passed Juniper Networks to become the No. 2 player after Cisco Systems, according to research firm Infonetics.

In the past, carriers typically dealt with problems separately by, for example, stringing high-speed broadband lines to homes, or boosting cellular coverage. Now that consumers expect to get their YouTube videos whether they’re watching TV, using a laptop, or pecking at their iPhone, wired and wireless are no longer different worlds. Alcatel-Lucent’s position in both realms makes it a contender for most contracts, says KPN’s Blok.

“They’ll always be on the short list,” he says.

Verwaayen’s fiercest rival now may be Huawei, the networking giant in Shenzhen, China. Huawei has won business from some big Western carriers, but has yet to crack the U.S. market where security concerns favor domestic and European suppliers such as Alcatel-Lucent.

One remaining concern about Alcatel-Lucent is that it may be too dependent on demand from U.S. carriers.

At least for now, there’s little sign of fear about the threat from China. In part, that’s because of a field trip Verwaayen and 30 Alcatel executives took to Shenzhen in 2010 at the invitation of Huawei founder Ren Zhengfei.

Gains in Latin America

The contingent came back from the two-day event emboldened; while they might never match Huawei on price, Huawei didn’t have a Bell Labs.

While he hasn’t faced off against Huawei much in the United States, Alcatel-Lucent Americas chief Robert Vrij says he’s scoring wins in Latin America.

“We’re kicking some serious you-know-what,” he told a packed room of smiling staffers at the company’s Murray Hill, N.J., research lab in April. Maybe it was the bonuses he’d just announced or the rising stock price, but nobody threw tomatoes.

AT&T, T-Mobile again tout nearly ubiquitous LTE coverage, but when? by Lynnette Luna

Wireless operator executives descended on Capitol Hill to defend or argue against AT&T’s (NYSE:T) proposed $39-billion acquisition of T-Mobile USA.

During his testimony before the Senate Judiciary Committee’s subcommittee on antitrust, competition policy and consumer rights, AT&T CEO Randall Stephenson reiterated one of the company’s key justifications for the acquisition: that it would allow AT&T to deliver LTE service to 97 percent of Americans, covering 55 million more Americans than AT&T’s current LTE plans. That message, of course, was echoed by T-Mobile USA CEO Philipp Humm.

“T-Mobile does not have sufficient spectrum to roll out a competitive LTE network while also continuing to support its existing GSM and HSPA+ networks,” Humm said in his testimony. “By combining the spectrum of both companies, the entity will be able to support LTE and the two legacy technologies, GSM and HSPA+. It will allow LTE to reach more than 97 percent of the U.S. population, as stated by AT&T, which is something T-Mobile would not have been able to do on its own.”

Stephenson said the LTE benefits fit with President Obama’s goal of extending high-speed mobile broadband to 98 percent of all Americans within five years. “This is a private market solution to address a public policy objective,” he said, adding that AT&T will not need to use Universal Service Fund money to reach its LTE buildout targets.

While Stephenson plays the rural broadband card, I’d like to know some specifics. Just what sort of timetable is the company committed to when it comes to deploying expensive LTE services in markets that have lower population densities? The whole reason operators have not deployed services to rural areas in the past is that doing so hinders an operator’s ability to make money. Operators, after all, have shareholders to answer to.

LightSquared found favor with the FCC early on with its plan to use satellite and terrestrial spectrum to build a wholesale LTE network by promising to offer LTE coverage to 92 percent of the population by 2015 and meeting certain aggressive buildout thresholds.

If regulators are going to approve this deal, they need to hold AT&T’s feet to the fire when it comes to rolling out LTE to the majority of the country by stipulating rollout, coverage and data speed requirements that are more aggressive than its 700 MHz license and AWS license stipulations. AT&T’s 700 MHz licenses sit in the lower C and B Blocks (which lie in band class 17).

In general, 700 MHz licenses need to cover between 35 percent and 40 percent of their license territory (depending on the license held) within four years of receiving the license and 70 percent to 75 percent of the territory within 10 years. That really doesn’t guarantee a viable broadband offering for many years to 97 percent of the population. In the AWS band, where both operators hold spectrum, AWS licensees must make a showing of “substantial service” in their license area by 2025.

WiMAX, LTE and HSPA+: Comparing operators’ 4G coverage areas by Mike Dano

There’s been plenty of noise recently over the launch of 4G in all its colors: LTE, WiMAX and (according to some) HSPA+. Indeed, the definition of 4G continues to be somewhat of a moving target–PCMag.com recently reported that AT&T Mobility (NYSE:T) now defines HSPA at 14.4 Mbps, with enhanced backhaul, as a 4G offering. Previously the carrier had indicated that its HSPA+ 21 Mbps network and enhanced backhaul offered 4G speeds.

Obviously, a number of factors play into the actual speeds that wireless users will get: distance from the cell site, backhaul, radio access network technology, spectrum usage and even the receiving handset can all affect the speeds obtained by the user. For example, MetroPCS (NYSE:PCS) last year switched on the nation’s first major LTE network deployment; however, the narrow slivers of spectrum that the carrier allocated to its LTE buildout mean that users don’t get speeds anywhere near the 5-12 Mbps that Verizon Wireless (NYSE:VZ) promises its LTE users.
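As a toy illustration of why spectrum width matters so much: LTE peak throughput scales roughly linearly with channel bandwidth. The spectral-efficiency figure below is an assumed round number for illustration, not a carrier-reported value:

```python
# Toy illustration: LTE peak rate scales roughly linearly with channel
# bandwidth, which is why narrow spectrum slivers cap user speeds. The
# spectral-efficiency figure is an assumed round number, not carrier data.
ASSUMED_BPS_PER_HZ = 1.5  # practical downlink spectral efficiency (assumed)

for bw_mhz in (1.4, 5.0, 10.0, 20.0):
    peak_mbps = bw_mhz * ASSUMED_BPS_PER_HZ  # MHz x b/s/Hz = Mb/s
    print(f"{bw_mhz:>4} MHz carrier: ~{peak_mbps:.0f} Mb/s peak")
```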

Nonetheless, the race to 4G is a major event in the U.S. wireless industry, and much of the real work involves the rollout of faster radio access networks. Clearwire (NASDAQ:CLWR), Verizon Wireless and others have doled out billions of dollars to install new, faster base stations and antennas in order to speed up users’ data transmissions.

Thus, it’s worth looking at where each 4G carrier currently stands in its 4G network buildout. Fierce teamed with wireless coverage firm American Roamer to have a look at exactly where 4G is currently available, and how each carrier stacks up in terms of 4G network rollouts.

Naturally, there are a number of caveats to these maps, including:

  • 4G has become a marketing term, and not all 4G networks are created equal. However, the maps below cover the networks that are advertised as providing 4G speeds–regardless of those networks’ actual speeds.
  • These maps do not indicate the associated backhaul. So in the case of AT&T, these maps only show the locations where the carrier has deployed the HSPA+ radio access portion of its buildout and not the locations where it has upgraded its backhaul capability.
  • These maps are very, very recent. However, American Roamer said it would not provide specific timestamps on the maps because doing so could divulge the company’s internal processes.

So, let’s get started:

WiMAX, LTE and HSPA+: How they compare

This American Roamer map shows how WiMAX, LTE and HSPA+ stack up against each other. This map does not break out coverage by carrier; instead, it highlights the coverage of WiMAX (Clearwire) vs. LTE (Verizon and MetroPCS) vs. HSPA+ (AT&T and T-Mobile USA). For a look at each carrier’s coverage, read on:

Verizon Wireless’ LTE network

Verizon Wireless launched its LTE network in December 2010 and currently covers around 110 million people with the network. The carrier advertises download speeds of 5-12 Mbps and activated 500,000 LTE subscribers in its first quarter.

Clearwire’s WiMAX network

Clearwire currently covers around 120 million people with its WiMAX network, and advertises speeds of 3-6 Mbps with bursts of over 10 Mbps. The carrier counted 6.15 million WiMAX subscribers at the end of its first quarter.

AT&T Mobility’s HSPA+ network

AT&T Mobility earlier this year designated its HSPA+ network as a 4G offering, after T-Mobile USA made similar claims about its own HSPA+ network. But which flavor of HSPA+ (either 14.4 Mbps or 21 Mbps) is AT&T deploying? “American Roamer can not make statements as to the speed or 3GPP release of the HSPA+ for AT&T at this time,” the company said.

T-Mobile USA’s HSPA+ network

T-Mobile USA last year launched its HSPA+ network as a 4G service. The company earlier this year said it will launch HSPA+ 42 network technology sometime this year–which supports theoretical peak speeds of 42 Mbps–and will cover around 140 million POPs with HSPA+ 42 by year-end. However, those plans may be affected by AT&T’s proposed $39 billion purchase of T-Mobile, a deal the companies expect to close next year.

So which flavor of HSPA+ is reflected in T-Mobile’s map? “T-Mobile is pretty clear claiming that their HSPA+ is the 21 Mbps variety. American Roamer’s HSPA+ coverage for T-Mobile is reflective of their claim of 21 Mbps,” American Roamer said.

MetroPCS’ LTE network


MetroPCS last year switched on its LTE network in its first market and recently completed its LTE buildout in its 14 core markets. Though MetroPCS advertises the network as a 4G offering, it doesn't deliver speeds anywhere near those of Verizon's LTE network, largely because of the narrow slivers of spectrum MetroPCS has allocated to LTE.
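To see why channel width matters so much, consider a rough back-of-envelope throughput estimate. The sketch below is purely illustrative: the channel widths and the spectral-efficiency figure are assumptions chosen for demonstration, not carrier-reported values.

```python
# Back-of-envelope: an LTE carrier's peak sector throughput scales roughly
# with channel bandwidth. All figures here are illustrative assumptions,
# not measured or carrier-reported values.

SPECTRAL_EFFICIENCY_BPS_PER_HZ = 1.5  # assumed real-world LTE downlink efficiency

channels_hz = {
    "narrow channel (1.4 MHz)": 1.4e6,  # hypothetical narrow allocation
    "mid channel (5 MHz)": 5.0e6,
    "wide channel (10 MHz)": 10.0e6,    # hypothetical wide allocation
}

for name, hz in channels_hz.items():
    mbps = hz * SPECTRAL_EFFICIENCY_BPS_PER_HZ / 1e6
    print(f"{name}: ~{mbps:.1f} Mbps of shared downlink capacity")
```

Under these assumptions, a 1.4 MHz channel yields only about 2 Mbps of capacity shared across a sector, while a 10 MHz channel yields roughly 15 Mbps, a gap consistent with the difference between MetroPCS's and Verizon's advertised speeds.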

Crisis Mappers: Mobile technology helps disaster victims worldwide by Patrick Meier (iRevolution)

There are now 6.8 billion people on the planet. And about 5 billion cell phones.

This extraordinary ability to connect has turned a modern convenience into a lifeline through a system called crisis mapping. It first gained prominence after the earthquake in Haiti, when people used their cell phones to send text messages to a centralized response team. Since then, crisis mapping has been used to help victims in emergency zones following the tornadoes in the Midwest, the earthquake in Japan and the unrest in the Middle East.
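The basic data flow is simple enough to sketch. Below is a minimal, hypothetical illustration of the ingestion step, turning an incoming text message into a geotagged report a volunteer can place on a map; the message format and field names are invented for the example, not taken from any actual crisis-mapping platform.

```python
# Minimal sketch of a crisis-mapping ingestion step: convert an incoming SMS
# into a geotagged report. The "lat,lon; free text" format is an invented,
# simplified stand-in for real report formats.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CrisisReport:
    received_at: datetime
    latitude: float
    longitude: float
    message: str

def parse_sms(raw: str) -> CrisisReport:
    """Split an assumed 'lat,lon; free text' message into a report record."""
    coords, _, text = raw.partition(";")
    lat_str, lon_str = coords.split(",")
    return CrisisReport(
        received_at=datetime.now(timezone.utc),
        latitude=float(lat_str),
        longitude=float(lon_str),
        message=text.strip(),
    )

# Example: a report from Port-au-Prince, ready to be plotted by coordinates.
print(parse_sms("18.5392,-72.3364; Collapsed building, people trapped"))
```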

Today, there are hundreds of volunteers in more than 50 countries creating maps of crises around the world, using a system that incorporates the lessons learned in Haiti.

Alison Stewart reports on this worldwide network of volunteers –  regular people — using a breakthrough technology to help others.


CHI ’11: Enhancing the Human Condition by Janie Chang

The Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI 2011), being held May 7-12 in Vancouver, British Columbia, provides a showcase of the latest advances in human-computer interaction (HCI).

“The ongoing challenge,” says Desney S. Tan, CHI 2011 general conference chair and senior researcher at Microsoft Research Redmond, “is to make computing more accessible by integrating technology seamlessly into our everyday tasks, to understand and enhance the human condition like never before.”

Microsoft Research has a consistent record of support for CHI through sponsorships and research contributions. This year, Microsoft researchers authored or co-authored 40 conference papers and notes, approximately 10 percent of the total accepted.

This comes as no surprise to Tan.

“Microsoft Research’s goal,” he says, “is to further the state of the art in computer science and technology. As the realms of human and technology become more and more intertwined, Microsoft Research has focused more and more of our effort at the intersection of human and computer, and this is evident from our researchers’ level of participation.”

One unusual contribution comes from Bill Buxton, Microsoft Research principal researcher. Items from Buxton’s impressive accumulation of interactive devices are on display in an exhibit titled “The Future Revealed in the Past: Selections from Bill Buxton’s Collection of Interactive Devices.”

“Effects of Community Size and Contact Rate in Synchronous Social Q&A,” by Ryen White and Matthew Richardson of Microsoft Research Redmond and Yandong Liu of Carnegie Mellon University, received one of 13 best-paper awards during the conference. So did “Your Noise is My Command: Sensing Gestures Using the Body as an Antenna,” by former Microsoft Research intern Gabe Cohn and visiting faculty member Shwetak Patel, both of the University of Washington, along with Dan Morris and Tan of Microsoft Research Redmond. One of two best-notes awards went to “Interactive Generator: A Self-Powered Haptic Feedback Device,” co-authored by Akash Badshah of the Phillips Exeter Academy, a residential high school in Exeter, N.H.; Sidhant Gupta, Cohn and Patel of the University of Washington; and Nicolas Villar and Steve Hodges of Microsoft Research Cambridge.

The Touch-Sensitive Home

Imagine being freed of physical attachments to input devices because your body is the input device. One approach is to put sensors on the body. The challenge then is to separate actual “signal” from “noise,” such as ambient electromagnetic interference, which overwhelms sensors and makes signal processing difficult. In “Your Noise is My Command: Sensing Gestures Using the Body as an Antenna,” the researchers turned the problem on its head.

“Can we use that electrical noise as a source of information about where a user is and what that user is doing?” Morris recalls asking. “These are the first experiments to assess whether this is feasible.”

Figure: The human body behaves as an antenna in the presence of noise radiated by power lines and appliances. By analyzing this noise, the entire home becomes an interaction surface.

The human body is literally an antenna, picking up signals while moving through the noisy electrical environment of a typical home. The researchers tested whether it is possible to identify signals with enough precision to tell what the user is touching and from where. To measure those signals, the researchers placed a simple sensor on each study participant and recorded the electrical signals collected by those sensors. Laptop computers carried in each person’s backpack collected data as the participants performed a series of “gestures,” such as touching spots on walls and appliances or moving through different rooms.

Next came determining whether analysis of this data provided the ability to distinguish between gestures and locations. It was possible in many cases to recognize participants’ actions based solely on the ambient noise picked up by their bodies. For example, once a participant “taught” the algorithms about the noise environment around a particular light switch by demonstrating gestures around the switch, it was possible to determine which of five spots near that switch the user was touching, with an accuracy of better than 90 percent. Similarly, researchers could identify in which room a participant was present at any given time with an accuracy exceeding 99 percent, because the electrical noise environment of each room is distinct.
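The paper's own pipeline is not spelled out here, but the idea can be sketched: treat a window of sensor samples as a noise signature, extract frequency-domain features, and match it against signatures recorded while the user demonstrated each gesture. The nearest-centroid classifier below is an assumed stand-in for whatever learning method the researchers actually used.

```python
# Illustrative sketch (not the authors' implementation): classify touch
# locations from ambient electrical noise picked up by a body-worn sensor.
import numpy as np

def features(samples: np.ndarray) -> np.ndarray:
    """Log-magnitude spectrum of one window of sensor samples."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    return np.log1p(spectrum)

def train(windows_by_label: dict) -> dict:
    """One mean feature vector (centroid) per demonstrated gesture."""
    return {label: np.mean([features(w) for w in windows], axis=0)
            for label, windows in windows_by_label.items()}

def classify(window: np.ndarray, centroids: dict) -> str:
    """Return the gesture whose training centroid is closest in feature space."""
    f = features(window)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))
```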

“It was quite a gratifying series of results,” Morris says. “Now, we are considering how we can package this up into a real-time, interactive system and what innovative scenarios we can enable when we turn your entire home into a touch-sensitive surface.”

The Patient as Medical Display Surface

Reports from the World Health Organization and the American Medical Association confirm that patient noncompliance is a major obstacle to successful medical outcomes in treatment of chronic conditions. Doctor-patient communication has been identified as one of the most important factors for improving compliance. The paper “AnatOnMe: Facilitating Doctor-Patient Communication Using a Projection-Based Handheld Device” focuses on understanding how lightweight, handheld projection technologies can be used to enhance doctor-patient communication during face-to-face exchanges in clinical settings.

Figure: Three presentation surfaces: a) body, b) model, and c) wall.

Focusing on physical therapy, co-authors Tao Ni of Virginia Tech—a former Microsoft Research Redmond intern—Amy K. Karlson of Microsoft Research Redmond, and Daniel Wigdor, formerly of Microsoft Research Redmond and now at the University of Toronto, spoke with doctors to understand general communication challenges and design requirements, then built and studied a handheld projection system that flexibly supports the key aspects of information exchange. Doctors can direct handheld projectors at walls or curtains to create an “anywhere” display, or at a patient to overlay useful medical information directly atop the appropriate portion of the anatomy for an augmented-reality view, or “virtual X-ray.”
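The geometric core of such a “virtual X-ray” is a mapping from the projector's image onto the quadrilateral where the target surface appears. The sketch below computes a standard four-point homography in plain NumPy; the corner coordinates are invented for illustration, and a real system would obtain them from tracking.

```python
# Hedged sketch: map a projected image onto a detected surface quad using a
# 3x3 homography solved from four point correspondences (direct linear method).
import numpy as np

def homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Solve the homography that maps four src points to four dst points."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    h = np.linalg.solve(np.array(rows, float), np.array(rhs, float))
    return np.append(h, 1.0).reshape(3, 3)

# Map the unit image square onto a (hypothetical) quad tracked on a forearm.
H = homography(np.array([[0, 0], [1, 0], [1, 1], [0, 1]]),
               np.array([[120, 80], [300, 95], [290, 220], [110, 200]]))
center = H @ np.array([0.5, 0.5, 1.0])
print(center[:2] / center[2])  # where the image center lands on the surface
```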

Reviews and formal lab studies with physical therapists and patients established that handheld projections delivered high value and a more engaging, informative experience than what is traditionally available.

“This is an interesting new space,” Karlson says, “because, despite the prevalence of technology in many medical settings, technology has been relatively absent from face-to-face communication and education opportunities between doctors and patients.

“The coolest part was hearing the positive reactions from study participants when we projected medical imagery directly onto their arms and legs. We got, ‘Wow!’ ‘Cool!’ and ‘I feel like I am looking directly through my skin!’ There seems to be something quite compelling and unique about viewing medical imagery on one’s own body.”

Touch-Free Interactions in the Operating Room

The growth of image-guided procedures in surgical settings has led to an increased need to interact with digital images. In a collaboration with Lancaster University funded by Microsoft Research Connections, Rose Johnson of the Open University in Milton Keynes, U.K.; Kenton O’Hara, Abigail Sellen, and Antonio Criminisi of Microsoft Research Cambridge; and Claire Cousins of Addenbrooke’s Hospital in Cambridge, U.K., address the problem of enabling rich, flexible, but touch-free interaction with patient data in surgical settings. The resulting paper, “Exploring the Potential for Touchless Interaction in Image-Guided Interventional Radiology,” received a CHI 2011 Honorable Mention paper award.

During treatments such as interventional radiology, images are critical in guiding surgeons’ work; yet because of sterility issues, surgeons must avoid touching input devices such as mice or keyboards. They must navigate digital images “by proxy,” using other members of the surgical team to find the right image, pan, or zoom. This can be onerous and time-consuming.

Figure: A view toward an X-ray table from a computer area, showing a surgical team and the complex collaborative environment that touch-free interactions must address.

The research team began fieldwork with the goal of understanding surgeons’ working practices. The researchers are collaborating with surgical teams to develop and evaluate a system. Touchless-interaction solutions such as Kinect for Xbox 360 offer opportunities for surgeons to regain control of navigating through data. There are many challenges, though, in terms of enabling collaborative control of the interface, as well as achieving fluid engagement and disengagement with the system, because the system needs to know which gestures are “for the system” and which are not.
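One common way to attack the engagement problem is an explicit “clutch”: a distinctive pose tells the system that subsequent movement is a command, and a second pose releases control. The sketch below is an assumed illustration of that pattern, not the project's actual design, and the thresholds are arbitrary.

```python
# Assumed illustration of an engagement "clutch" for touchless control.
# Skeleton coordinates are normalized; y grows downward, as in many depth APIs.
from enum import Enum

class Mode(Enum):
    DISENGAGED = 0
    ENGAGED = 1

class TouchlessController:
    def __init__(self) -> None:
        self.mode = Mode.DISENGAGED

    def update(self, hand_y: float, shoulder_y: float, hand_dx: float):
        """Called once per tracking frame; returns a command string or None."""
        if hand_y < shoulder_y - 0.2:    # hand raised well above shoulder: engage
            self.mode = Mode.ENGAGED
        elif hand_y > shoulder_y + 0.2:  # hand dropped to the side: disengage
            self.mode = Mode.DISENGAGED
        if self.mode is Mode.ENGAGED and abs(hand_dx) > 0.05:
            return "pan_left" if hand_dx < 0 else "pan_right"
        return None                      # movement is ignored while disengaged
```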

“The most intriguing aspect of this project,” Sellen says, “is the potential to make a real impact on patient care and clinical outcome by reducing the time it takes to do complicated procedures and giving surgeons more control of the data they depend on. From a technical side, it is exciting to see where technologies like Kinect can realize their value outside of the gaming domain.”

Use Both Hands

Touch interfaces are great for impromptu casual interactions, but it is not easy to select a point precisely with your finger or to move an image without rotating it unless there are on-screen menus or handles. In the world of touch, though, such options are not desirable, because they introduce clutter. “Rock & Rails: Extending Multi-touch Interactions with Shape Gestures to Enable Precise Spatial Manipulations,” by Wigdor; Hrvoje Benko of Microsoft Research Redmond; and John Pella, Jarrod Lombardo and Sarah Williams of Microsoft, proposes a solution by using recognized hand poses on the surface in combination with touch.

“Rock and Rails” is an extension of the touch-interaction vocabulary. It maintains the direct-touch input paradigm but enables users to make fluid, high degree-of-freedom manipulations while simultaneously providing easy mechanisms to increase precision, specify manipulation constraints, and avoid occlusions. The tool set provides mechanisms for positioning, isolating orientation, and scaling operations using system-recognized hand postures, while enabling traditional, simple, direct-touch manipulations.

Figure: The Rock & Rails techniques augment a) traditional direct-manipulation gestures with independently recognized hand postures used to restrict manipulations conducted with the other hand: b) rotate, c) resize, and d) 1-D scale. This enables fluid selection of degrees of freedom and, thus, rapid, high-precision manipulation of on-screen content.
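The dispatch logic behind this idea is straightforward to sketch: a recognized posture from one hand selects which degrees of freedom the other hand's drag is allowed to change. The posture names below echo the figure above, but their exact semantics and the transform model are loose assumptions made for illustration.

```python
# Illustrative sketch of posture-constrained manipulation in the Rock & Rails
# style: the non-dominant hand's posture restricts what a drag may change.
from dataclasses import dataclass

@dataclass
class Transform:
    x: float = 0.0
    y: float = 0.0
    angle: float = 0.0
    scale_x: float = 1.0
    scale_y: float = 1.0

def apply_drag(t: Transform, posture, dx: float, dy: float) -> Transform:
    if posture == "rock":       # assumed: pin position; drag rotates instead
        t.angle += dx * 0.01
    elif posture == "rail":     # assumed: constrain drag to 1-D scaling
        t.scale_x = max(0.1, t.scale_x + dx * 0.01)
    elif posture == "curved":   # assumed: uniform resize only
        s = max(0.1, t.scale_x + dx * 0.01)
        t.scale_x = t.scale_y = s
    else:                       # no posture: ordinary direct-touch translation
        t.x += dx
        t.y += dy
    return t
```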

The project was a collaborative effort between Microsoft Research and the Microsoft Surface team, so the researchers were able to test their work on real-world designers—the intended audience.

“One of the best moments of the project,” Benko recalls, “was when we realized our gestures could be made ‘persistent’ on the screen. We had transitioned from the model where you had to keep the pose of the hand in order to signal a particular option, to a more relaxed mode where the user could ‘create’ or ‘pin’ a proxy representation of a gesture. This allows users to perform all sorts of wacky combinations of operations without needing to hold the gesture for a long period of time.”

These are just a few of Microsoft Research’s current investigations into how to enhance the ways people interact with computing devices.

“HCI is all about discovering and inventing technologies that deeply transform people’s lives,” Tan concludes. “Microsoft Research is committed to advancing the state of the art in human-computer interaction.”