The CAT plasma thruster will propel a 5 kg satellite into deep space, far beyond Earth orbit, at 1/1000th the cost of previous missions. Learn more by checking out and supporting CAT: A Thruster for Interplanetary CubeSats, a Kickstarter project by Benjamin Longmier, Ph.D.
Platform Architecture for Combining Solar, Thermal and Vibration Energy Harvesting with MPPT and a Single Inductor
The energy harvesting system combines energy from thermal, solar, and vibrational sources. It uses a dual-path architecture that improves efficiency, with solar MPPT and a single off-chip inductor. The IC is designed in a 0.35 µm digital CMOS process.
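The solar MPPT loop can be sketched with the classic perturb-and-observe algorithm; this is an illustrative assumption, since the description above does not say which tracking scheme the IC actually implements:

```python
def perturb_and_observe(read_power, v_start, step=0.01, iters=200):
    """Hill-climbing MPPT: nudge the operating voltage and keep moving
    in whichever direction increases the harvested power."""
    v = v_start
    p_prev = read_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = read_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy solar cell model whose power peaks at 0.5 V (hypothetical numbers)
mpp = perturb_and_observe(lambda v: 1.0 - (v - 0.5) ** 2, v_start=0.1)
```

The tracker ends up oscillating within one step of the maximum power point, which is the characteristic behavior (and limitation) of perturb-and-observe.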
On most days, programming is a rewarding experience, with no problem too challenging to solve. Perseverance, intuition, the right tool — they all come together seamlessly to produce elegant, beautiful code.
But then a botched deployment, yet another feature request, or a poorly documented update with crippling dependencies comes crashing headlong into the dream. Sure, we might wish our every effort had enduring impact, that the services our apps rely on would be rock-solid, that we would get the respect we deserve, if only from those who should know better. But the cold, harsh realities of programming get in the way.
That doesn’t mean the effort isn’t worth it. But it does mean we have some hard truths to face. Here are 10 aspects of programming developers must learn to live with.
Developer hard truth No. 1: It’s all just if-then-else statements
Language designers argue about closures, typing, and amazing abstractions, but in the end, it’s just clever packaging wrapped around good, old if-then-else statements.
That’s pretty much all the hardware offers. Yes, there are op codes for moving data in and out of memory and op codes for arithmetic, but the rest is branch or not branch based on some comparison.
Folks who dabble in artificial intelligence put a more mysterious cloak around these if-then-else statements, but at the end of the day, the clever statistical recommendation engine is going to choose the largest or smallest value from some matrix of numbers. It will perform calculations, then skim through the list, saying, "if this is greater, else if this is greater," until it derives its decision.
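The "skim through the list" step the author describes is just an argmax scan, which really is nothing but a chain of if-else comparisons:

```python
def pick_best(scores):
    """Choose the index of the largest score: repeated
    'if this is greater' comparisons, nothing more."""
    best_index = 0
    for i in range(1, len(scores)):
        if scores[i] > scores[best_index]:
            best_index = i
        # else: keep the current best -- the implicit else branch
    return best_index

choice = pick_best([0.12, 0.87, 0.35, 0.61])  # index 1 holds the largest value
```

However elaborate the statistics that produced the scores, the final decision reduces to this loop.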
Developer hard truth No. 2: Most of the Web is just data stored in tables
For the past 20 years, the word “Internet” has tingled with the promise of fabulous wealth, better friendships, cheaper products, faster communication, and everything but a cure for cancer. Yet at its core, most of the Internet is a bunch of data stored in tables.
Match.com? A table of potential dates with columns filled with hair color, religion, and favorite dessert. eBay? It’s a table of deals with a column set to record the highest bid. Blogs? One table with one row for every cranky complaint. You name it; it’s a table.
We like to believe that the Internet is a mystic wizard with divine wisdom, but it’s closer to Bob Cratchit, the clerk from Charles Dickens’ “A Christmas Carol,” recording data in big accounting books filled with columns. It’s an automated file clerk, not the invention of an electronic Gandalf or Dumbledore.
We see this in our programming languages. Ruby on Rails, one of the most popular comets to cross the Web, is a thin veneer over a database. Specify a global variable and Rails creates a column for you because it knows it’s all about building a table in a database.
Oh, and the big, big innovation that’s coming 20 years into the game is the realization that we don’t always need to fill up every column of the table. That’s NoSQL for you. It may try to pretend to be something other than a table, but it’s really a more enlightened table that accepts holes.
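The "table that accepts holes" point can be made concrete. A relational row reserves every column (holes become NULL), while a document store simply omits fields it doesn't have; either way, the reader needs the same defensive lookup. A minimal sketch with plain Python dicts (hypothetical column names, echoing the dating-site example above):

```python
# Relational style: every row carries every column; holes become None/NULL.
sql_rows = [
    {"name": "alice", "hair": "brown", "religion": None, "dessert": "flan"},
    {"name": "bob",   "hair": None,    "religion": None, "dessert": None},
]

# Document (NoSQL) style: absent columns are simply absent.
nosql_docs = [
    {"name": "alice", "hair": "brown", "dessert": "flan"},
    {"name": "bob"},
]

def dessert_of(record):
    """Reading either representation requires the same guarded access."""
    return record.get("dessert") or "unknown"
```

Both shapes answer the same queries; NoSQL just stops pretending the empty cells exist.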
Developer hard truth No. 3: Users have minds of their own
You might think that the event listener you created for your program and labeled “save” has something to do with storing a copy of the program’s state to disk. In reality, users will see it as a magic button that will fix all of the mistakes in their ruined document, or a chance to add to their 401(k), something to click to open up the heavens and lead to life eternal.
In other words, we might like to think we’ve created the perfect machine, but the users beat us every time. For every bulletproof design we create to eliminate the chance of failure, they come up with one combination of clicks to send the machine crashing without storing anything on disk. For every elegant design, they find a way to swipe or click everything into oblivion.
There are moments when users can be charming, but for the most part, they are quirky and unpredictable — and can be very demanding. Programmers can try to guess how and where these peculiarities will arise when users are confronted with the end result of code, but they’ll probably fail. Most users aren’t programmers, and asking a programmer to think like the average user is like asking a cat to think like a dog.
This goes beyond simple cases of user stupidity. No matter how clever your invention or elegant your code, it still has to catch on. Predicting that users will not balk at a 140-character limit for expressing ire and desires is no easy business.
Developer hard truth No. 4: Most of what you code will never be used
Somehow it feels good to know that your new software can speak XML, CSV, and Aramaic. Excuse me; our implementation team would like to know if this can decode Mayan hieroglyphics because we might need that by the end of 2012. If it doesn’t have that feature, we’ll be OK, but it will be so much easier to get the purchase order signed if you could provide that. Thanks.
The users, of course, couldn't care less. They want one button, and even that one button can confuse them. The wonderful code you wrote to support the other N-1 buttons might get executed when the QA team comes through, but beyond that, there is no guarantee the sprints and all-nighters will have been anything more than busywork and bureaucracy.
Programmers don’t even get the same boost as artists, who can always count on selling a few copies of their work to their parents and relatives. Our parents won’t come through and run the extra code on the feature that just had to be implemented because someone in a brainstorm thought it would be a game changer.
Developer hard truth No. 5: Scope creep is inevitable
One manager I know told me his secret was to always smile and tell his team he loved what they were doing, even if it was terrible. Then on the way out the door, he would say, “Oh, one more thing.” That thing was often a real curveball that upended the project and sent everyone back to redesigning the application.
Scope creep is almost a direct consequence of the structure of projects. The managers do all of the hard work with spreadsheets before it begins. They concoct big dreams and build economic models to justify the investment.
All the hard work ends once they bring in the developers. Suddenly the managers have nothing to do but fret. Is that button in the right space? Should the log-in page look different? And fretting leads to ideas and ideas lead to requests for changes.
They love to use phrases like “while you’re mucking around in there” or “while you’ve got the hood up.” This is what happens to projects, and it’s been happening for years. After all, even Ada Lovelace’s program for Babbage’s Analytical Engine, considered by most to be the first computer program, endured its own form of scope creep, born of nearly a year spent augmenting notes.
Developer hard truth No. 6: No one understands you — especially the boss
There are two kinds of programmers: those who work for bosses who can’t program and don’t know how hard it can be to make your code compile, and those who work for former programmers who’ve forgotten how hard it can be to make your code compile.
Your boss will never understand you or your work. It’s understandable when the liberal arts major in business development gets an idea that you can’t solve without a clairvoyant computer chip. They couldn’t know better.
This truth has one advantage: If the boss understood how to solve the problem, the boss would have stayed late one night and solved it. Hiring you and communicating with you is always more time consuming than doing it themselves.
Developer hard truth No. 7: Privacy is a pain
We want our services to protect our users and their information. But we also want the sites to be simple to operate and responsive. The click depth — the number of clicks it takes to get to our destination — should be as shallow as possible.
The problem is that privacy means asking a few questions before letting someone dig deeper. Giving people control over the proliferation of information means adding more buttons to define what happens.
Privacy also means responsibility. If the user doesn’t want the server to know what’s going on, the user better take responsibility because the server is going to have trouble reading the user’s mind. Responsibility is a hassle and that means that privacy is a hassle.
Privacy can drive us into impossible logical binds. There are two competing desires: One is to be left alone, and the other is to be sent a marvelous message. One desire offers the blissful peace with no interruptions, and the other can bring an invitation or a love letter, a job offer, a dinner party, or just a free offer from your favorite store.
Alas, you can’t have one without the other. Fighting distractions will also drive off the party invitations. Hiding your email address means that the one person who wants to find you will be pulling out their hair looking for a way to contact you. In most cases, they’ll simply move on.
Developer hard truth No. 8: Trust isn’t cheap
The promise of Web 2.0 sounded wonderful. Just link your code to someone else’s and magic happens. Your code calls theirs, theirs calls yours, and the instructions dance together like Fred and Ginger.
If only it were that easy. First, you have to fill out all these forms before they let you use their code. In most cases, your lawyers will have a fit because the forms require you to sign away everything. What do you get in return? Hand-waving about how your code will maybe get a response from their code some of the time. Just trust us.
Who could blame them, really? You could be a spammer, a weirdo, or a thief who wants to leverage Web 2.0 power to work a scam. They have to trust you, too.
And the user gets to trust both of you. Privacy? Sure. Everyone promises to use the best practices and the highest-powered encryption software while sharing your information with everyone under the sun. Don’t worry.
The end result is often more work than you want to invest in a promise that kinda, sorta delivers.
Developer hard truth No. 9: Bitrot happens
When you start, you can grab the latest versions of the libraries and everything works for a week or two. Then version 1.0.2 of library A comes along, but it won’t work with the latest version of library B because A’s programmers have been stuck on the previous big release. Then the programmers working on C release some new feature that your boss really wants you to tap. Naturally it only works with version 1.0.2.
When houses and boats rot, they fall apart in one consistent way. When code rots, it falls apart in odd and complex ways. If you really want C, you have to give up B. If you choose B, you’ll have to tell your boss that C isn’t a real option.
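The A/B/C deadlock described above can be checked mechanically. A toy constraint model (hypothetical version numbers, standing in for the example's library A, which is stuck on an old B, and library C, which needs the new one) shows that no version of B satisfies both neighbors at once:

```python
# Hypothetical constraints from the example above.
constraints = {
    "A 1.0.2 needs": lambda b: b < (2, 0),    # built against the previous big B release
    "C's new feature needs": lambda b: b >= (2, 0),  # only works with the latest B
}

available_b = [(1, 9), (2, 0), (2, 1)]  # candidate versions of library B

def satisfiable(constraints, candidates):
    """Return the candidate versions that satisfy every constraint at once."""
    return [v for v in candidates
            if all(ok(v) for ok in constraints.values())]

compatible = satisfiable(constraints, available_b)  # empty: pick B for A, or B for C, not both
```

Real dependency resolvers solve exactly this problem across dozens of libraries, which is why the conflicts compound the way the next paragraph describes.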
This example used only three libraries. Real projects use a dozen or more, and the problems grow exponentially. To make matters worse, the rot doesn’t always present itself immediately. Sometimes it seems like the problem is only in one unimportant corner that can be coded around. But often this tiny incompatibility festers and the termites eat their way through everything until it all collapses.
The presence of bitrot is made all the more amazing by the fact that computer code doesn’t wear out. There are no moving parts, no friction, no oxidation, and no carbon chains acting as bait for microbes. Our code is an eternal statement that should be just as good in 100 years as it was on the day it was written. Yet it isn’t.
The only bright spots are the emulators that allow us to run that old Commodore 64 or Atari code again and again. They’re wonderful museums that keep code running forever — as long as you fight the bitrot in the emulator.
Developer hard truth No. 10: The walled garden will flourish
For all the talk about the importance of openness, there’s more and more evidence that only a small part of the marketplace wants it. To make things worse, those who do want it are often not willing to pay extra for the privilege. Free software advocates want free as in speech and free as in beer; few are willing to pay much for it.
That may be why the biggest adopters of Linux and BSD come wrapped in proprietary code. Devices like TiVo may have Linux buried inside, but the interface that makes them great isn’t open. The same goes for the Mac.
The companies that ship Linux boxes, however, have trouble competing against Windows boxes. Why pay about the same price for Linux when you can buy a Windows machine and install Linux alongside?
Walled gardens flourish when people will pay more for what’s inside, and we’re seeing more and more examples of cases when the people will pay the price of admission. Mac laptops may cost two to three times as much as a commodity PC, yet the stores are packed to the limit imposed by the fire code.
The walls are getting thicker. At the launch of the third iPad, Apple bragged about shipping millions and millions of post-PC devices. Deep inside an iPhone is an open source operating system, but only a tiny percentage of customers even know this. Until people know and care about this, walled gardens will thrive.
TAU technology spots environmental hazards from inches to light-years away
The world may seem painted with endless color, but physiologically the human eye sees only three bands of light — red, green, and blue. Now a Tel Aviv University-developed technology is using colors invisible to the naked eye to analyze the world we live in. With the ability to detect more than 1,000 colors, the “hyperspectral” (HSR) camera, like Mr. Spock’s sci-fi “Tricorder,” is being used to “diagnose” contaminants and other environmental hazards in real time.
Prof. Eyal Ben-Dor of TAU’s Department of Geography and the Human Environment says that reading this extensive spectrum of color allows the sensor to analyze 300 times more information than the human brain can process. Small and easy to use, the sensor can provide immediate, cost-effective, and accurate monitoring of forests, urban areas, agricultural lands, harbors, or marinas — areas which are often endangered by contaminants and phenomena such as soil erosion or sediment dust. Using the hyperspectral camera will ultimately lead to better protection and treatment of the environment.
The HSR sensor, detailed in the journal Remote Sensing of Environment, has both commercial and scientific applications, says Prof. Ben-Dor, who has consulted for local and foreign space agencies in their use of the technology. These applications can include anything from helping companies adhere to regulations on environmental contamination to measuring the extent of environmental damage caused by forest fires.
From far and wide
The sensor interprets reflected sunlight radiation that bounces off an object, material, or environment. Each reflected color represents a different chemical reaction between two compounds. “A combination of absorption or reflection of energy creates the color that the HSR sensor sees,” explains Prof. Ben-Dor. The sensor’s extensive range — reading information from as close as 0.4 inches and as far as 500 miles away — means it can be placed anywhere from the ground itself to unmanned aircraft, satellites or weather balloons. The camera can also be pointed towards the stars to help astronomers gain insight into the make-up of a planet’s atmosphere.
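How a measured spectrum gets "diagnosed" can be illustrated with the spectral angle mapper, a standard hyperspectral matching technique (an illustrative choice; the article does not say which algorithm Prof. Ben-Dor's group uses). A pixel's reflectance spectrum is compared against library signatures and labeled with the closest match:

```python
import math

def spectral_angle(a, b):
    """Angle between two reflectance spectra; smaller means more similar.
    Using the angle makes the match insensitive to overall brightness."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify(pixel, library):
    """Label the pixel with the library material at the smallest angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Toy 4-band signatures (hypothetical values; real HSR sensors use hundreds of bands)
library = {
    "clean soil":  [0.30, 0.35, 0.40, 0.45],
    "oil-stained": [0.10, 0.09, 0.08, 0.08],
}
label = classify([0.28, 0.33, 0.41, 0.44], library)
```

The same comparison run per pixel, per frame, is what turns raw spectra into the real-time contamination maps described below.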
Most recently, Prof. Ben-Dor has used the technology to survey different environments, including soil and sea, seeking to identify problem areas. The area around gas pipelines is one site of environmental contamination, he says. Leaks can be particularly damaging to the surrounding earth, so the sensors can be used to test along a pipeline for water content, organic matter, and toxins alike. In agricultural areas, the sensor can be used to determine levels of salt in the soil to save crops before they are destroyed.
The technique is also effective in marinas, which are highly contaminated by gasoline and sealants from the undersides of sea vessels. “This toxic material sinks, and becomes concentrated on the sediment of the marina, which also contaminates nearby beaches,” Prof. Ben-Dor explains.
The color of possibility
Before the HSR technology was developed, samples of potentially contaminated or endangered soil, sediment or water would have to be taken to the lab for lengthy analysis. With the use of a hyperspectral sensor, real-time analysis allows immediate action to better environmental conditions. The sensor can also be used to determine levels of indoor pollution caused by dust, analyze the strength of concrete being used for buildings in earthquake zones, or scan the environment around an open mine to look at the impact on human health.
According to Prof. Ben-Dor, this technology’s potential is endless and can be used in disciplines such as medicine, pharmacology, textile industry, and civil engineering. Without so much as a touch, the sensor can provide in-depth analysis on environmental composition. It’s a method that can map and monitor the earth from “microscope to telescope,” he says.
Computers do not suffer from the same frailties as humans and, as a result, have greater capacity to achieve in certain areas
A major shift in the way people interact with computers is coming. And it is something that we badly need. The problems we face in our societies are growing ever more complex, but our human cognitive capacities remain unchanged. Modern information technology helps, to be sure. But the current model of “software as tool” is ultimately limited. Times change, and our software needs to change with them, ideally without the intervention of a priesthood of technical experts. I believe as artificial intelligence advances, a new model – “software as collaborator” – will become possible, with tremendous potential benefits.
Collaborators adapt to each other, playing off each other’s strengths, so that the whole is greater than the sum of the parts. Software collaborators could be designed to be enough like people that this mutual adaptation is possible, and that we can understand and trust their contributions. But we should also be able to design them without certain human frailties. People tend to look only for evidence that confirms their hypotheses – called confirmation bias – and have other things on their minds, such as their life outside of work.
Software collaborators that do not share these frailties could become valuable complements to individuals and to teams. We are still a long way from being able to build software collaborators, but important progress is being made on many fronts in artificial intelligence. For example, IBM’s Watson shows how multiple AI techniques can combine synergistically to perform question-answering at a level that no one thought possible a few years ago. Machine-reading techniques were used to assimilate vast collections of documents into internal representations that supported multiple forms of reasoning. Machine learning techniques were used to determine which strategies were likely to succeed for different types of questions. Massive hardware power was harnessed to provide real-time responses, allowing the system to perform at the level of the best humans at its task. Such a system takes a step towards the collaborator model by adapting to the human world, instead of humans adapting to the IT world.
But this is only a first step. Collaborators engage in dialogue, with follow-up questions interpreted with respect to the ongoing conversation. Such dialogues can include sketching and gestures as well as text and speech – so-called ‘multimodal dialogues’. Many researchers are working on sketch understanding and on vision for interpreting gestures and facial expressions, and Microsoft’s Kinect will catalyse even more work in these areas, as well as in dialogue understanding. Collaborators work over long time spans, ranging from hours to years, tracking changing information, updating models to maintain situational awareness and learning as they go.
Building robust systems that can reason and learn over a vast range of knowledge remains an exciting open challenge. Many in the artificial intelligence community are addressing this question, from a variety of perspectives. Cognitive architectures offer one intriguing approach, in trying to model cognition in the “large” – as opposed to narrow technical areas. Often this work is performed in collaboration with other cognitive scientists – since understanding how people reason, learn and interact provides valuable clues for creating intelligent systems.
Watson’s enormous computing requirements may seem to limit the potential for future systems, which will require even more computation than it used. But yesterday’s supercomputer is tomorrow’s smartphone: within a few years of Deep Blue’s victory at chess in 1997, there were programs that performed at similar levels without special hardware. So assuming artificial intelligence – and computer science and engineering more broadly – remains on course, we should be able to create software collaborators.
Kenneth D. Forbus is chairman of the Cognitive Science Society, in the United States. This article first appeared in PublicServiceEurope.com’s sister title Public Service Review: European Science & Technology
The 1/4-inch OV8850 leads the CMOS sensor pixel design race in the smartphone market by enabling autofocus modules that are 20 percent slimmer than today’s 1/3.2-inch 8-megapixel modules. Besides a small footprint, the 1.1-micron OmniBSI-2 pixel offers significant improvements in power efficiency and comparable image quality to the previous generation 1.4-micron OmniBSI™ pixel, making it an attractive solution for next-generation smartphones and tablets.
An integrated scaler allows the camera to maintain full field of view in 1080p/30 high-definition (HD) video and preview modes and provides extra adjustable resolution for electronic image stabilization (EIS). Additionally, the sensor’s 2 x 2 binning functionality provides EIS for 720p/60 HD video recording. Other advanced features of the OV8850 include an on-chip temperature sensor, two PLLs, context switching, 4 Kbits of one-time programmable memory, lens shading correction, defective pixel cancelling, black sun elimination, and alternate row exposure for high dynamic range (HDR) video and still image capture.
The OV8850 supports 8 and 10-bit RAW image output with all standard image quality control functions supported through the SCCB interface. The sensor fits in an 8.5 x 8.5 mm autofocus camera module with a build height of 4.7 mm and features a 4-lane MIPI/LVDS that facilitates the required high data transfer rate.
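The 2 x 2 binning mentioned above is simple to sketch: each non-overlapping square of four pixels is combined into one, halving resolution in each dimension, which is how an 8-megapixel array can feed 720p/60 video with margin left over for EIS. A minimal model (averaging; real sensors may sum charge or weight the Bayer channels differently):

```python
def bin_2x2(frame):
    """Average each non-overlapping 2x2 block of a 2D pixel array.
    Assumes even dimensions, as sensor arrays have."""
    h, w = len(frame), len(frame[0])
    return [
        [(frame[r][c] + frame[r][c + 1] +
          frame[r + 1][c] + frame[r + 1][c + 1]) / 4.0
         for c in range(0, w, 2)]
        for r in range(0, h, 2)
    ]

small = bin_2x2([[10, 20, 30, 40],
                 [50, 60, 70, 80]])   # a 2x4 frame becomes 1x2
```

Beyond the resolution trade, averaging four photosites per output pixel also improves low-light signal-to-noise, which is the other reason binned video modes exist.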
Package Size: 5550 x 5400 µm
Array Size: 3280 x 2464 pixels
Pixel Size: 1.1 µm
Frame Rate: 30 fps @ 1080p (EIS), 24 fps @ full resolution, 60 fps @ 720p (EIS)
Temperature (stable image): 0°C to 65°C
Temperature (operating): -30°C to 70°C
Way back in 2010, Apple spent some of its fast-amassing cash pile to buy Polar Rose, a face recognition firm from Sweden. It seems Apple has been busy ever since incorporating Polar Rose’s face identification and tracking algorithms into iOS 5, the upcoming revision of the operating system that powers iPhones and iPads. So deep is the integration, far beyond a simple app, that there are API handles.
This is huge news, for all the reasons that Google’s use of face recognition in its online offerings could change much about the web. By adding controls into iOS’ API, Apple’s allowing third-party apps to access the core face recognition tech. Code like “hasLeftEyePosition,” “mouthPosition” and the image-processing for identification means that apps can track faces and also recognize users.
This means games can track face positions for an unusual mode of input, apps like Instagram could automatically tag people’s faces they can identify, smart video apps could use facial cues to do digital image stabilization and so on. In more interactive modes, we can even imagine iOS face IDs on an iPhone being used as an automatic log-in on a paired Mac. And it’s even plausible that Apple may be using facial recognition as part of its secure user authentication for future wireless wave-and-pay systems, which we know it’s been working on.
But wait, there’s more. Another relatively recent Apple purchase, Siri, is also showing up in the latest developer builds of iOS 5, alongside evidence that Apple is including code it acquired as part of its deal with voice recognition experts Nuance. Siri was a highly promising smart personal assistant app, and until now it had entirely disappeared, so the fact that it’s showing up in iOS 5 is interesting. And it could be transformational. What Apple seems to be doing is enabling smart voice control in iOS 5 along the lines of “set up a meeting with Mark on Wednesday at 11 a.m.,” where Mark is a user contact. There are also text-to-speech powers, which could be really important for using your phone while driving: we can imagine an iPhone reading out incoming SMSs, as well as a smarter integrated navigation app (which we know Apple is also working on).
In this sense, Apple’s moving the iPhone and iPad toward the famous Knowledge Navigator concept it created back in the 1980s. And we are thus tempted to think it’ll only work in full on newer devices–possibly just the iPad 3, the upcoming iPhone 5 (and maybe the current generation too): Apple prefers to make its enhanced user experiences “all or nothing,” implying that the degraded performance older devices offer for new high-tech software is too disappointing to users.
And it’s also a powerful new weapon in the war against Android tablets and phones. When the Android Nexus One first emerged, we called its integration of voice control an important secret feature. But it’s never been properly realized, much less spun into the tightly integrated smart “digital PA” which Apple seems to be working toward. By adding in all this tech, Apple’s enabling all sorts of clever marketing angles, and is even appealing to business users a little more–something it seems keen on at a corporate level.
The piezoelectrically modulated resistive memory (PRM) devices take advantage of the fact that the resistance of piezoelectric semiconducting materials such as zinc oxide (ZnO) can be controlled through the application of strain from a mechanical action. The change in resistance can be detected electronically, providing a simple way to obtain an electronic signal from a mechanical action.
“We can provide the interface between biology and electronics,” said Zhong Lin Wang, Regents professor in the School of Materials Science and Engineering at the Georgia Institute of Technology. “This technology, which is based on zinc oxide nanowires, allows communication between a mechanical action in the biological world and conventional devices in the electronic world.”
The research was reported online June 22 in the journal Nano Letters. The work was sponsored by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), the U.S. Air Force and the U.S. Department of Energy.
In conventional transistors, the flow of current between a source and a drain is controlled by a gate voltage applied to the device. That gate voltage determines whether the device is on or off.
The piezotronic memory devices developed by Wang and graduate student Wenzhuo Wu take advantage of the fact that piezoelectric materials like zinc oxide produce a charge potential when they are mechanically deformed or otherwise put under strain. These PRM devices use the piezoelectric charge created by the deformation to control the current flowing through the zinc oxide nanowires that are at the heart of the devices – the basic principle of piezotronics. The charge creates polarity in the nanowires – and increases the electrical resistance much like gate voltage in a conventional transistor.
“We are replacing the application of an external voltage with the production of an internal voltage,” Wang explained. “Because zinc oxide is both piezoelectric and semiconducting, when you strain the material with a mechanical action, you create a piezopotential. This piezopotential tunes the charge transport across the interface – instead of controlling channel width as in conventional field effect transistors.”
An array of piezoelectrically modulated resistive memory (PRM) cells is shown being studied in an optical microscope. Credit: Gary Meek
The mechanical strain could come from mechanical activities as diverse as signing a name with a pen, the motion of an actuator on a nanorobot, or biological activities of the human body such as a heart beating.
“We control the charge flow across the interface using strain,” Wang explained. “If you have no strain, the charge flows normally. But if you apply a strain, the resulting voltage builds a barrier that controls the flow.”
The piezotronic switching affects current flowing in just one direction, depending whether the strain is tensile or compressive. That means the memory stored in the piezotronic devices has both a sign and a magnitude. The information in this memory can be read, processed and stored through conventional electronic means.
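A toy model of the behavior just described (hypothetical numbers and a deliberately crude linear barrier; the real mechanism is the piezopotential at the nanowire interface, not this arithmetic): the sign of the applied strain selects which current direction is suppressed, so the stored state carries both a sign and a magnitude.

```python
def prm_write(strain):
    """Store a bit as the sign of the applied strain; its magnitude sets
    the barrier height (hypothetical linear model, arbitrary units)."""
    return {"sign": 1 if strain >= 0 else -1, "barrier": abs(strain) * 0.5}

def prm_read(cell, direction):
    """Current flows freely except against the strain-built barrier,
    giving the one-directional switching described above."""
    baseline = 1.0
    if direction == cell["sign"]:
        return baseline / (1.0 + cell["barrier"])  # suppressed direction
    return baseline                                # unaffected direction

cell = prm_write(+2.0)        # tensile write
forward = prm_read(cell, +1)  # reduced current in the gated direction
reverse = prm_read(cell, -1)  # normal current the other way
```

Reading the cell in both directions recovers both the sign (which direction is suppressed) and the magnitude (how strongly), matching the sign-and-magnitude memory the researchers describe.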
Taking advantage of large-scale fabrication techniques for zinc oxide nanowire arrays, the Georgia Tech researchers have built non-volatile resistive switching memories for use as a storage medium. They have shown that these piezotronic devices can be written, that information can be read from them, and that they can be erased for re-use. About 20 of the arrays have been built so far for testing.
The zinc oxide nanowires, which are about 500 nanometers in diameter and about 50 microns long, are produced with a physical vapor deposition process that uses a high-temperature furnace. The resulting structures are then treated with oxygen plasma to reduce the number of crystalline defects – which helps to control their conductivity. The arrays are then transferred to a flexible substrate.
“The switching voltage is tunable, depending on the number of oxygen vacancies in the structure,” Wang said. “The more defects you quench away with the oxygen plasma, the larger the voltage that will be required to drive current flow.”
The piezotronic memory cells operate at low frequencies, which are appropriate for the kind of biologically-generated signals they will record, Wang said.
Image shows an array of piezoelectrically modulated resistive memory (PRM) cells on which metal electrodes have been patterned using lithography. Credit: Gary Meek
These piezotronic memory elements provide another component needed for fabricating complete self-powered nanoelectromechanical systems (NEMS) on a single chip. Wang’s research team has already demonstrated other key elements such as nanogenerators, sensors and wireless transmitters.
“We are taking another step toward the goal of self-powered complete systems,” Wang said. “The challenges now are to make them small enough to be integrated onto a single chip. We believe these systems will solve important problems in people’s lives.”
Wang believes this new memory will become increasingly important as devices become more closely connected to individual human activities. The ability to build these devices on flexible substrates means they can be used in the body – and with other electronic devices now being built on materials that are not traditional silicon.
“As computers and other electronic devices become more personalized and human-like, we will need to develop new types of signals, interfacing mechanical actions to electronics,” he said. “Piezoelectric materials provide the most sensitive way to translate these gentle mechanical actions into electronic signals that can be used by electronic devices.”
Low-power chip designer ARM is putting up to a million of its processors into a new breed of computer that aims to replicate the way the brain works, and several million of its dollars into a new Cambridge cleantech venture.
Prof Steve Furber, who co-designed the ARM processor with Sophie Wilson while at Acorn Computers in Cambridge, is leading the SpiNNaker (Spiking Neural Network architecture) project – a massively parallel chip multiprocessor system that mimics how nerve cells in the brain interact.
Meanwhile, ARM has co-led a $7m Series A investment into Amantys, a one-year-old startup developing power control technology that reduces the amount of energy lost in the power conversion process. The startup says it can “address power losses all the way from wind and solar photovoltaic modules, transmission grids and transformers through to electric motors and electric vehicles.”
Amantys, which is staffed by a team of former ARM execs and Dr Patrick Palmer of Cambridge University’s department of engineering, says it is aiming to release its first products by Q4 of this year. The funding round was co-led by Moonray Investors, part of Fidelity International.
Amantys says it is looking to recruit ‘analogue design gurus’ as well as embedded software and power electronics engineers.
ARM has quietly assembled an investment portfolio worth $40m and containing 15 companies.
Principal designer of the BBC Microcomputer as well as the ARM 32-bit RISC microprocessor, Prof Furber is now ICL Professor of Computer Engineering at the University of Manchester. He is working with scientists from the universities of Cambridge, Southampton and Sheffield as well as industrial partners – foremost among them, ARM – to develop a massive computer nicknamed the ‘brain box.’
By emulating the networks of billions of neurons in the brain using ARM processors, the hope is that scientists will gain a greater understanding of how processing in the brain works – including how damage to the brain interferes with it – but also that these biological models will lead to more efficient and fault-tolerant computers.
Professor Furber said: “Developing and understanding the information processing in the brain is the key. We are actively engaging with neuroscientists and psychologists, both here at the University and elsewhere.
“This could ultimately be of great help for patients, for example, who have presented with reading problems caused by strokes or similar brain injuries. Psychologists have already developed neural networks on which they can reproduce the clinical pathologies. At present they are limited in the fidelity they can achieve with these networks by the available computer power, but we hope that SpiNNaker will raise that bar a lot higher.”
The project has received funding of £5m from EPSRC.
The chips that will power the system – designed in Manchester and manufactured in Taiwan – were delivered from the foundry last month and with 18 ARM processors on board every chip, they will dramatically increase the number of brain cell interactions that can be modeled compared to earlier test systems.
Although there will eventually be up to one million ARM processors in SpiNNaker, making it capable of modelling a billion neurons in real time, this is still only around 1 per cent of the human brain.
In the brain, neurons emit spikes which are relayed as tiny electrical signals. Each impulse is modelled in SpiNNaker as a ‘packet’ of data, which is sent to all connected neurons. Neurons are represented by simple equations which are solved in real time by software running on the ARM processors.
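The “simple equations” behind such a model can be illustrated with a leaky integrate-and-fire neuron, in which each arriving spike packet simply adds a synaptic weight to the membrane state. This is a hedged sketch of the general approach only; the constants, class name and packet handling below are illustrative assumptions, not SpiNNaker’s actual implementation.

```python
# Sketch of a leaky integrate-and-fire (LIF) neuron of the general kind
# simulated in software on spiking-network hardware. All constants are
# illustrative, not taken from the SpiNNaker project.

class LIFNeuron:
    def __init__(self, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
        self.tau = tau            # membrane time constant (ms)
        self.v = v_rest           # membrane potential (mV)
        self.v_rest = v_rest
        self.v_thresh = v_thresh
        self.v_reset = v_reset
        self.input_current = 0.0  # summed synaptic input this timestep

    def receive_packet(self, weight):
        # Each arriving spike "packet" just adds its synaptic weight.
        self.input_current += weight

    def step(self, dt=1.0):
        # Integrate dv/dt = (v_rest - v)/tau + I over one timestep of dt ms.
        self.v += dt * ((self.v_rest - self.v) / self.tau + self.input_current)
        self.input_current = 0.0
        if self.v >= self.v_thresh:
            self.v = self.v_reset
            return True   # a spike: in hardware, a packet sent to all targets
        return False

neuron = LIFNeuron()
spikes = 0
for t in range(100):          # 100 ms of constant synaptic drive
    neuron.receive_packet(2.0)
    if neuron.step():
        spikes += 1
print(spikes)
```

With constant drive the cell settles into a regular firing rhythm; in a machine like SpiNNaker, each emitted spike would be routed as a small packet to every connected neuron rather than computed as a continuous signal.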
Acorn Computers co-founder, Hermann Hauser, who describes Prof Furber as one of the smartest people he has met, told New Electronics in December 2010 that he was “keeping an entrepreneurial eye” on his latest work. He told the publication: “There is potential in the way the demonstrator works that one can build a computer that can do certain things others cannot. The sort of things humans are very good at and computers are not.”
ARM was approached in May 2005 to participate in the SpiNNaker project. A subsequent agreement paved the way to make ARM processor IP available to the project, along with ARM cell library IP to aid design and manufacturing.
Mike Muller, CTO at ARM, said: “SpiNNaker seeks to create a working model of the ultimate smart system, the human brain. Steve is part of the ARM family, so this project was a perfect way to partner with him and Manchester University, and for ARM to encourage leading research in the UK.”
Researchers at Stanford University have demonstrated a set of materials that could enable solar cells to use a band of the solar spectrum that otherwise goes to waste. The materials layered on the back of solar cells would convert red and near-infrared light—unusable by today’s solar cells—into shorter-wavelength light that the cells can turn into energy. The university researchers will collaborate with the Bosch Research and Technology Center in Palo Alto, California, to demonstrate a system in working solar cells in the next four years.
Even the best of today’s silicon solar cells can’t use about 30 percent of the light from the sun: that’s because the active materials in solar cells can’t interact with photons whose energy is too low. But though each of these individual photons is low energy, as a whole they represent a large amount of untapped solar energy that could make solar cells more cost-competitive.
The process, called “upconversion,” relies on pairs of dyes that absorb photons of a given wavelength and re-emit them as fewer, shorter-wavelength photons. In this case, the Bosch and Stanford researchers will work on systems that convert near-infrared wavelengths (most of which are unusable by today’s solar cells). The leader of the Stanford group, assistant professor Jennifer Dionne, believes the group can improve the sunlight-to-electricity conversion efficiency of amorphous-silicon solar cells from 11 percent to 15 percent.
The concept of upconversion isn’t new, but it’s never been demonstrated in a working solar cell, says Inna Kozinsky, a senior engineer at Bosch. Upconversion typically requires two types of molecules to absorb relatively long-wavelength photons, combine their energy, and re-emit it as higher-energy, shorter-wavelength photons. However, the chances of the molecules encountering each other at the right time, when they’re in the right energetic states, are low. Dionne is developing nanoparticles to add to these systems in order to increase those chances. To make better upconversion systems, she is designing metal nanoparticles that act like tiny optical antennas, directing light in these dye systems so that the dyes are exposed to more light at the right time, which creates more upconverted light, and then directing more of that upconverted light out of the system.
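The energy bookkeeping behind upconversion follows directly from E = hc/λ: combining two photons ideally yields one photon whose inverse wavelength is the sum of the inputs’ inverse wavelengths. A minimal sketch, assuming a 980 nm near-infrared input (an illustrative wavelength, not a figure from the Stanford/Bosch work):

```python
# Ideal (lossless) photon upconversion arithmetic. The 980 nm input is an
# illustrative near-infrared wavelength, not a measured value.

H = 6.62607015e-34   # Planck constant (J*s)
C = 2.99792458e8     # speed of light (m/s)

def photon_energy(wavelength_m):
    return H * C / wavelength_m

def upconverted_wavelength(lam1_m, lam2_m):
    # Output energy = sum of input energies, so 1/lam_out = 1/lam1 + 1/lam2.
    return 1.0 / (1.0 / lam1_m + 1.0 / lam2_m)

lam_out = upconverted_wavelength(980e-9, 980e-9)
print(round(lam_out * 1e9))   # two 980 nm photons -> one 490 nm photon, ideally
```

In practice losses mean the emitted photon carries less than the combined energy, but the direction of the conversion, from long, unusable wavelengths toward the cell’s absorption band, is what matters for the solar application.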
The ultimate vision, says Dionne, is to create a solid upconverting material. Sheets of such a material could be laid down on the bottom of the cell, separated from the cell itself by an electrically insulating layer. Long-wavelength photons that pass through the active layer would be absorbed by the upconverter layer, then re-emitted back into the active layer as usable, shorter-wavelength light.
Kozinsky says Bosch’s goal is to demonstrate upconversion of red light in working solar cells in three years, and upconversion of infrared light in four years. Factoring in the time needed to scale up to manufacturing, she says, the technology could be in Bosch’s commercial solar cells in seven to 10 years.
Researchers have discovered a way to capture and harness energy transmitted by such sources as radio and television transmitters, cell phone networks and satellite communications systems. By scavenging this ambient energy from the air around us, the technique could provide a new way to power networks of wireless sensors, microprocessors and communications chips.
“There is a large amount of electromagnetic energy all around us, but nobody has been able to tap into it,” said Manos Tentzeris, a professor in the Georgia Tech School of Electrical and Computer Engineering who is leading the research. “We are using an ultra-wideband antenna that lets us exploit a variety of signals in different frequency ranges, giving us greatly increased power-gathering capability.”
Tentzeris and his team are using inkjet printers to combine sensors, antennas and energy-scavenging capabilities on paper or flexible polymers. The resulting self-powered wireless sensors could be used for chemical, biological, heat and stress sensing for defense and industry; radio-frequency identification (RFID) tagging for manufacturing and shipping; and monitoring tasks in many fields including communications and power usage.
A presentation on this energy-scavenging technology was scheduled for delivery July 6 at the IEEE Antennas and Propagation Symposium in Spokane, Wash. The discovery is based on research supported by multiple sponsors, including the National Science Foundation, the Federal Highway Administration and Japan’s New Energy and Industrial Technology Development Organization (NEDO).
Communications devices transmit energy in many different frequency ranges, or bands. The team’s scavenging devices can capture this energy, convert it from AC to DC, and then store it in capacitors and batteries. The scavenging technology can presently take advantage of frequencies from FM radio to radar, a range spanning 100 megahertz (MHz) to 15 gigahertz (GHz) or higher.
Scavenging experiments utilizing TV bands have already yielded power amounting to hundreds of microwatts, and multi-band systems are expected to generate one milliwatt or more. That amount of power is enough to operate many small electronic devices, including a variety of sensors and microprocessors.
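The hundreds-of-microwatts figure is consistent with a back-of-the-envelope free-space estimate using the standard Friis transmission formula. This sketch assumes an illustrative 10 kW effective radiated power, a 600 MHz carrier and a small receive antenna; none of these numbers come from the Georgia Tech experiments.

```python
import math

# Friis free-space estimate of the RF power an antenna can capture from a
# distant broadcast transmitter. All numbers below (transmitter ERP,
# antenna gains, frequency, distance) are illustrative assumptions.

def friis_received_power(p_tx_w, g_tx, g_rx, freq_hz, distance_m):
    lam = 2.99792458e8 / freq_hz  # wavelength (m)
    return p_tx_w * g_tx * g_rx * (lam / (4 * math.pi * distance_m)) ** 2

# A TV transmitter radiating ~10 kW ERP at 600 MHz, received 500 m away
# with a modest 2 dBi (~1.58x) antenna:
p_rx = friis_received_power(10e3, 1.0, 1.58, 600e6, 500.0)
print(p_rx * 1e6)  # received power in microwatts: on the order of 100 uW
</```

Under these assumptions the available power lands squarely in the hundreds-of-microwatts regime the researchers report, which is why multi-band harvesting is needed to reach the milliwatt level.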
And by combining energy-scavenging technology with super-capacitors and cycled operation, the Georgia Tech team expects to power devices requiring more than 50 milliwatts. In this approach, energy builds up in a battery-like super-capacitor and is utilized when the required power level is reached.
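The arithmetic behind cycled operation is a simple duty-cycle budget: energy harvested continuously at a low rate is spent in short bursts at a much higher rate. A sketch, assuming a 1 mW harvest rate and a 50 mW active load (round numbers chosen for illustration, not measured values):

```python
# Duty-cycle budget for supercapacitor-buffered operation. The harvest
# rate, load and burst duration are illustrative assumptions.

p_harvest = 1e-3   # continuous scavenged power (W)
p_load = 50e-3     # power the device needs while active (W)
t_active = 0.1     # burst duration (s)

e_burst = p_load * t_active        # energy consumed per burst (J)
t_recharge = e_burst / p_harvest   # time to re-harvest that energy (s)
duty_cycle = t_active / (t_active + t_recharge)

print(t_recharge, duty_cycle)  # ~5 s recharge, ~2% duty cycle
```

In other words, a sensor drawing 50 mW can run on 1 mW of ambient energy only if it sleeps roughly 98 percent of the time, which suits periodic sensing and reporting well.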
The researchers have already successfully operated a temperature sensor using electromagnetic energy captured from a television station that was half a kilometer distant. They are preparing another demonstration in which a microprocessor-based microcontroller would be activated simply by holding it in the air.
Exploiting a range of electromagnetic bands increases the dependability of energy-scavenging devices, explained Tentzeris, who is also a faculty researcher in the Georgia Electronic Design Center (GEDC) at Georgia Tech. If one frequency range fades temporarily due to usage variations, the system can still exploit other frequencies.
The scavenging device could be used by itself or in tandem with other generating technologies. For example, scavenged energy could assist a solar element to charge a battery during the day. At night, when solar cells don’t provide power, scavenged energy would continue to increase the battery charge or would prevent discharging.
Utilizing ambient electromagnetic energy could also provide a form of system backup. If a battery or a solar-collector/battery package failed completely, scavenged energy could allow the system to transmit a wireless distress signal while also potentially maintaining critical functionalities.
The researchers are utilizing inkjet technology to print these energy-scavenging devices on paper or flexible paper-like polymers — a technique they are already using to produce sensors and antennas. The result would be paper-based wireless sensors that are self-powered, low-cost and able to function independently almost anywhere.
To print electrical components and circuits, the Georgia Tech researchers use a standard-materials inkjet printer. However, they add what Tentzeris calls “a unique in-house recipe” containing silver nanoparticles and/or other nanoparticles in an emulsion. This approach enables the team to print not only RF components and circuits, but also novel sensing devices based on such nanomaterials as carbon nanotubes.
When Tentzeris and his research group began inkjet printing of antennas in 2006, the paper-based circuits only functioned at frequencies of 100 or 200 MHz, recalled Rushi Vyas, a graduate student who is working with Tentzeris and graduate student Vasileios Lakafosis on several projects.
“We can now print circuits that are capable of functioning at up to 15 GHz — 60 GHz if we print on a polymer,” Vyas said. “So we have seen a frequency operation improvement of two orders of magnitude.”
The researchers believe that self-powered, wireless paper-based sensors will soon be widely available at very low cost. The resulting proliferation of autonomous, inexpensive sensors could be used for applications that include:
• Airport security: Airports have both multiple security concerns and vast amounts of available ambient energy from radar and communications sources. These dual factors make them a natural environment for large numbers of wireless sensors capable of detecting potential threats such as explosives or smuggled nuclear material.
• Energy savings: Self-powered wireless sensing devices placed throughout a home could provide continuous monitoring of temperature and humidity conditions, leading to highly significant savings on heating and air-conditioning costs. And unlike many of today’s sensing devices, environmentally friendly paper-based sensors would degrade quickly in landfills.
• Structural integrity: Paper or polymer-based sensors could be placed throughout various types of structures to monitor stress. Self-powered sensors on buildings, bridges or aircraft could quietly watch for problems, perhaps for many years, and then transmit a signal when they detected an unusual condition.
• Food and perishable-material storage and quality monitoring: Inexpensive sensors on foods could scan for chemicals that indicate spoilage and send out an early warning if they encountered problems.
• Wearable bio-monitoring devices: This emerging wireless technology could become widely used for autonomous observation of patient medical issues.
Explosive ordnance disposal team leaders say their technology is behind the curve
The U.S. military’s cadre of bomb disposal technicians needs lighter equipment, the ability to detect explosives at stand-off distances and their sensors consolidated into one handheld device.
But most of all, they want to feel that their technology is putting them one step ahead of the insurgents who are planting the improvised explosive devices that are wounding and killing U.S. troops in Iraq and Afghanistan.
Instead — when it comes to tools that can defeat IEDs — the Defense Department has been playing a game of catch-up for the past 10 years.
“Our acquisition process inside the Department of Defense does not have the agility to keep up with our enemy’s threat,” said Capt. Dan Coleman of the Navy expeditionary warfare division and a former officer at the Joint Improvised Explosive Device Defeat Organization.
Requirements for defeating, detecting or protecting troops from IEDs must go through a bureaucratic approval process, the Joint Capabilities Integration and Development System, and fight for funding; then, after a long wait, the explosive ordnance disposal teams finally receive what they asked for, he said.
By that time, “our enemy is going to be three or four more … cycles ahead of that solution that we have just fielded to the war fighter,” Coleman said at a National Defense Industrial Association-Explosive Ordnance Disposal Memorial Foundation conference in Fort Walton Beach, Fla.
The number of EOD technicians is relatively small — about 5,500 spread out across the four services. Most of them “self-select” to join the units. Their deeds have been celebrated in the Academy Award-winning film The Hurt Locker.
While they are few in number, their impact on the battlefield is crucial, said Army Col. Marue “MO” Quick, chief of the EOD and humanitarian mine action at the office of the secretary of defense’s special operations/low intensity conflict division.
IEDs were the weapon of choice in Iraq, and the tactic has made its way to Afghanistan. In both wars, the majority of combat deaths and injuries are a result of these bombs, she said.
Meanwhile, in the 12-month period from May 2010 to this year’s conference, 20 bomb technicians lost their lives in combat, and 94 were wounded, Quick said. EOD technicians have responded to some 112,000 calls for their services in Iraq and 45,000 in Afghanistan, she added.
While Quick, Coleman and other speakers said the long wars have resulted in EOD forces being the best equipped and most experienced since the specialty emerged during World War II, there is still a constant need to keep pace with new tactics being employed by the bomb makers.
“While we have made tremendous progress and significant improvements in equipment and training over the last 10 years, we must remain vigilant and focused in staying in front of the dynamic and evolving nature of our enemy’s threat,” Quick said.
Coleman put it in more blunt terms: “We can’t go back to shooting behind the duck in terms of technology to defeat this IED threat.”
The research and development community needs to get ahead of the curve and look at the potential ways enemies will use bombs in the future. As a Navy officer, for example, Coleman said he worries about submersible IEDs, a threat that has not emerged, but could someday.
“In the last 10 years we have come from being underfunded, under-resourced, and under-equipped to catching up to the fight,” said Coleman. But that is what it is: a game of catch-up, he added.
Col. Dick Larry, chief of the adaptive Counter-IED/EOD solutions division at the Department of the Army headquarters, said, “Our adversary changes quicker than we do.”
An insurgent “has no bureaucracy. He can do things much quicker than I can do. Whenever I come up with a new jammer, I’ve got to look three moves ahead. What have I forced him to do now that I have this new jammer?” said Larry.
The services’ bomb technicians have several tools to help them with their inherently dangerous work. The radio-frequency jammers to which Larry referred prevent insurgents from detonating bombs through the airwaves. Bomb suits provide some protection in the event that an IED explodes. Robots can provide a view of a bomb from a safe distance, and their manipulators can sometimes be used to render bombs safe without the specialists needing to put on the cumbersome suits. Metal detectors have been around since World War II. Recently, ground-penetrating radar, which can see nonmetallic shapes, has been integrated onto the metal detectors. Explosives used to detonate IEDs in a controlled manner are also employed.
EOD specialists also gather evidence that is turned over to units such as Joint Task Force Paladin, which goes after the networks of bomb makers and those who fund the operations.
The Afghanistan surge is an example of how the Defense Department is yet again playing catch-up with insurgents who use improvised explosives.
In Iraq, there was a nonstop, deadly game between the bombers, who constantly changed the types of detonation triggers, and organizations such as JIEDDO, which was stood up in 2006 to respond to the rapidly rising casualty toll. The triggers and bombs became more and more sophisticated. Simple command wires evolved to remotely controlled devices. When jammers were fielded, insurgents switched to commercially available technologies such as garage door openers which did not rely on radio frequencies. At one point, U.S. military officials counted 90 methods to trigger a roadside bomb.
Eventually, the explosives themselves became more potent. Explosively formed projectiles, designed to penetrate up-armored vehicles, arrived in theater.
As operations in Iraq drew down, and the Afghan surge picked up, the Defense Department’s counter-IED enterprise was again behind the curve, several speakers at the conference said.
Afghan insurgents turned the clock back and began employing “pressure plate explosives,” or victim-activated bombs, an improvised landmine that relies on a person or vehicle stepping or driving on it to trigger the device. Jammers and command-wire detectors do nothing to defeat them. Tragically, Afghan civilians step on the mines as well.
“The threat is very complex in a rudimentary way. I’m not trying to be facetious when I say that,” said Col. Leo Bradley, commander of the Army’s 71st Ordnance Group at Fort Carson, Colo.
Afghan IEDs have a low metallic signature, often employing wood as a casing. They are not technologically sophisticated and use materials that are readily available, he said. They are difficult to find using standard mine detectors that seek out ferrous metals.
“While it looks crude, it’s actually quite sophisticated and matched asymmetrically to what our detection capabilities are,” Bradley said.
The explosives being used include a variety of military ordnance as well as homemade explosives with a range of chemical signatures. The triggers “could be electronic or non-electronic. The list goes on and on and on,” he added.
Developing sensors that can identify the key components is a tough technological challenge. “An improvised explosive device is just that, it’s improvised. It doesn’t have a standardized form. You have to be able to identify something that could look like anything … It is a wicked problem.”
The explosives are sometimes made from ammonium nitrate, a common fertilizer found throughout the region. It was most famously employed in the Oklahoma City bombing in 1995. Today, the chemical is banned in Afghanistan in an effort to reduce the amount of material on hand. How effective that ban is in a country where smuggling is rampant is unknown.
Sensors that can pick up nitrate-based explosives are relatively inexpensive and a mature technology. But a field covered in fertilizer creates a lot of clutter. Navy Cmdr. Todd Siddall, deputy commander of Coalition Joint Task Force Paladin, the organization in charge of defeating bomb-making networks in Afghanistan, acknowledged that farmers still use it.
“Are they bad guys? No they are just out there trying to earn a living,” he said.
EOD teams have not only had to contend with changes in enemy tactics, but also with those being ordered by U.S. Central Command. Navy Capt. Frederick E. Gaghan, chief of the technology requirements division at JIEDDO, said new counter-insurgency strategies that require troops to leave their vehicles and go on foot patrols also caught the organization off guard. Dismounted operations have resulted in a higher casualty rate, he said.
“We are trying to respond to that,” he added. “One of the issues we have had is trying to identify in advance what the war fighters’ requirements are.”
Siddall said there are now 14 different handheld devices fielded in Afghanistan used to detect improvised explosive devices. Most of them work well, but imagine a dismounted operation where an EOD team comprising three personnel must carry sensors, a small robot, plastic explosives used to detonate bombs they discover, a radio frequency jammer, not to mention food, water, weapons and ammunition, he said.
Sensors carried into the field include the metal detectors, ground-penetrating radar and a device designed to find hidden tripwires. With all that loading down EOD personnel, the 90-pound protective bomb suits are being left behind, said Siddall. Units have been given lighter robots, but they are not as capable as the larger models, he added.
Coleman said: “We have got to do everything we can to drive down that weight.”
“We are backing into the future. We are meeting today’s needs and today’s gaps as best we can, but we’re not looking over our shoulder to find out what tomorrow’s fight is going to be,” he said.
The EOD community will have to do this in a time of constrained resources, he added. JIEDDO, when he served there, could spend a lot of money to bring forth new counter-IED technology. It didn’t matter how much it cost. Schedule was the primary driver. As long as a vendor could deliver a solution to solve a problem quickly, the funding was there. Now, with fiscal pressures, JIEDDO will be saying, “we need it now, but we won’t be able to pay more,” Coleman said.
Gaghan said JIEDDO will adapt accordingly as its fiscal situation changes. But he believed the organization will continue to exist as long as the improvised explosive threat is around. Globally, the scourge continues unabated, he noted. Putting Iraq and Afghanistan aside, there were about 400 IED incidents every month in 2010, with Pakistan, India, Somalia and Thailand topping the list. Put Iraq and Afghanistan back into the equation, and there were more than 11,500 incidents last year, Gaghan said.
As for responsiveness, JIEDDO does have special working groups that look at future and emerging threats such as the use of lasers as triggering devices, various maritime IEDs “and other things we can’t discuss,” he said.
He pointed to statistics that indicated that JIEDDO’s efforts in Afghanistan are having an impact. While the number of bomb emplacements from October to May held steady, the number of “effective” attacks — ones that caused harm — dropped from 21 percent to 16 percent. While that may not seem like a large decrease, “The drop in a single percentage point means someone is coming home safe,” he added.
Meanwhile, JIEDDO has put out a request for information for robots that can move ahead of dismounted troops and trigger pressure plate explosives before they can do harm. The organization is trying to leverage work done by several Defense Department labs that have developed leader-follower drones designed to carry equipment. It wants to know if this work could be adapted for robots that would move ahead of foot patrols instead of following them. But he acknowledged that fielding such a capability would take many months.
A vendor who asked not to be named because his organization is responding to the RFI, said it will be a hard problem — especially if these robots are intended to be expendable and therefore, inexpensive. A typical ground robot also would either have to be heavy enough to set off an improvised landmine or have some kind of attachment, like a mallet, that would pound the ground.
To get ahead of insurgents’ changing tactics, Edwin Bundy, program manager for EOD programs at the office of the secretary of defense’s combating terrorism technical support office, said he is looking at an Australian program that organizes “fly-away” teams. When a new IED threat emerges, a group of experts is assembled that can travel quickly to investigate and determine what possible solutions can be applied to neutralize the problem.
They can bring back technical requirements based on the operational context that the bombs are in. For example, soil conditions are a factor when it comes to rendering roadside bombs safe. A soil expert could be part of the team.
They may know of a technology that could provide an 80 percent solution in the short term. They also would know what existing technologies are out of reach.
“We would all love to have Tricorder but that is a long ways off,” Bundy said.
Company CEOs draw their inspiration differently, and for George Tunis of Hardwire LLC in Pocomoke City, it’s a jagged piece of metal shrapnel that moves him.
“We keep that around because it helps remind us what those guys are faced with,” said Tunis, the company founder who keeps the black fragment as a visual reminder of the damage that improvised explosive devices can do to American forces in the Middle East.
Founded in 2002, Hardwire is primarily an armor builder, making lightweight, bomb-resistant armor for uses in everything from military vehicles to roofing on Green Zone buildings. From the leftover materials, Hardwire has been making bulletproof clipboards and donating them to Lower Shore law enforcement agencies.
Hardwire also makes armor to protect American bridges from terrorist attacks and makes ultrastrong fabrics that strengthen the structural integrity of buildings. The steel fabrics are especially popular for very old buildings and for structures in earthquake-prone zones.
Hardwire has now teamed up with Humvee maker AM General to make “chimney” upgrades to the vehicles. The design vents the blast from bombs detonated under the Humvee upward, through the center of the vehicle and past the crew inside.
“Our job is to try to stay ahead of the enemy,” Tunis said.
The secret of Hardwire’s armor is the strength of its material: a type of fiber called Dyneema.
Tunis said Dyneema fibers are stronger than spider silk, two-and-a-half times stronger than Kevlar and so light that they float. Hardwire gets Dyneema in by the roll and presses it with two machines that exert up to 25 million pounds of force, thus strengthening its properties, he said.
DSM, a chemical company based in the Netherlands, makes Dyneema at a plant in Greenville, N.C.
How can the material stop a .44-caliber bullet, even when it’s only as thick as a clipboard? “The material has the highest speed of sound. It spreads the force of the bullet out faster. That’s what makes it so special. I just find it fascinating,” Tunis said.
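Tunis’s point about speed of sound can be made roughly quantitative: the longitudinal wave speed in a material scales approximately as sqrt(E/ρ), so a stiff fiber light enough to float carries an impact away from the bullet’s footprint much faster than steel does. The modulus and density values below are textbook-order figures for UHMWPE fiber and steel, not DSM’s specifications.

```python
import math

# Rough longitudinal wave speed c = sqrt(E / rho). The material constants
# are textbook-order estimates, not manufacturer data.

def sound_speed(youngs_modulus_pa, density_kg_m3):
    return math.sqrt(youngs_modulus_pa / density_kg_m3)

dyneema = sound_speed(110e9, 970)   # UHMWPE fiber: ~110 GPa, lighter than water
steel = sound_speed(200e9, 7850)

print(round(dyneema), round(steel))  # fiber wave speed roughly double steel's
```

The stiff-but-light combination is what lets a thin panel distribute a bullet’s momentum over a wide area before the fibers fail locally.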
The company has donated hundreds of clipboards to law enforcement agencies across the Lower Shore. The clipboards, which weigh less than a pound, are made from the leftovers of the war zone armor material.
Firearm experts with the Ocean City Police Department and Wicomico County Sheriff’s Office have tested their clipboards at firing ranges. A single clipboard was shot by different handguns, and even took a hit from a 12-gauge shotgun. The clipboard crumpled but did not break.
“After watching this take five separate rounds, and none of them penetrating, going anywhere from a 9-millimeter to one of the strongest handguns in the world, a .44 Magnum, without penetration, I’m extremely impressed,” said Lt. Todd Richardson, who had his experience shooting the clipboard filmed.
Wicomico Sheriff Mike Lewis said the clipboard is like a shield that can block close-range gunshots. His office received 100 clipboards, enough to cover every single deputy.
“It adds an extra level of protection. Many police are gunned down on traffic stops. They have no protection other than maybe some body armor for their chest,” Lewis said.
At 54 employees strong, the profit-sharing Hardwire isn’t afraid of the giant defense contractors.
“We love being David against Goliath. Our advantage is speed and creativity over what we consider the establishment,” Tunis said.
Tunis said he chose to locate the company in Pocomoke City because there wasn’t any industrial space in Ocean City. The facility is located across the street from homes, on the site where the former Campbell’s Soup factory once stood.
“Pocomoke picked me,” Tunis said, adding that company officials are within a half-day’s drive when they must travel to the Pentagon to meet with top military officials — or to Aberdeen Proving Ground to have their products blown up.
Because of the secretive nature of the defense industry, Tunis declined to elaborate on other projects under development, but hinted at several new ones in the pipeline.
For that same reason, Hardwire would not disclose which bridges it has equipped with bomb-resistant materials. Skip Ebaugh, the company vice president, said the armor protects critical elements such as cables and cable abutments.
“We have successfully protected numerous bridges throughout the Northeast, including in New York City,” Ebaugh said.
In addition to bridges, Hardwire products are used to strengthen buildings.
Paolo Casadei, the technical director of FIDIA Technical Global Services in Perugia, Italy, leads a team of engineers who have retrofitted old buildings in Italy with Hardwire products.
Casadei said the most famous structure to receive an upgrade is located in the Piazza della Signoria in Florence, although hospitals, schools and public structures have also been upgraded.
He said buildings must be retrofitted in part because “all Italian territory and most of the Mediterranean countries are earthquake-prone zones,” and in part because of the structures’ age and sometimes poor design and construction, Casadei said by email.
In a crucial step towards the development of self-powering portable electronics, RMIT University researchers have for the first time characterised the ability of piezoelectric thin films to turn mechanical pressure into electricity.
The pioneering result has been published in the leading materials science journal, Advanced Functional Materials.
Lead co-author Dr Madhu Bhaskaran said the research combined the potential of piezoelectrics – materials capable of converting pressure into electrical energy – and the cornerstone of microchip manufacturing, thin film technology.
“The power of piezoelectrics could be integrated into running shoes to charge mobile phones, enable laptops to be powered through typing or even used to convert blood pressure into a power source for pacemakers – essentially creating an everlasting battery,” Dr Bhaskaran said.
“The concept of energy harvesting using piezoelectric nanomaterials has been demonstrated but the realisation of these structures can be complex and they are poorly suited to mass fabrication.
“Our study focused on thin film coatings because we believe they hold the only practical possibility of integrating piezoelectrics into existing electronic technology.”
The Australian Research Council-funded study assessed the energy generation capabilities of piezoelectric thin films at the nanoscale, for the first time precisely measuring the level of electrical voltage and current – and therefore, power – that could be generated.
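The power figure the researchers derived follows directly from the two measured quantities: electrical power is the product of voltage and current (P = V × I). A minimal sketch of that arithmetic, using hypothetical values rather than figures from the RMIT study:

```python
# Power generated by a piezoelectric film is the product of the
# measured voltage and current: P = V * I.
# The values below are hypothetical, chosen only to illustrate the
# nanoscale magnitudes involved; they are not from the study.
voltage = 0.5      # volts, hypothetical thin-film output
current = 2e-9     # amperes (2 nanoamps), hypothetical

power = voltage * current
print(f"Generated power: {power:.1e} W")  # on the order of a nanowatt
```

At these magnitudes, even small measurement errors in voltage or current compound in the power estimate, which is why precise nanoscale characterisation matters.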
Dr Bhaskaran co-authored the study with Dr Sharath Sriram, within RMIT’s Microplatforms Research Group, which is led by Professor Arnan Mitchell. The pair collaborated with Australian National University’s Dr Simon Ruffell on the research.
“With the drive for alternative energy solutions, we need to find more efficient ways to power microchips, which are the building blocks of everyday technology like smartphones and faster computers,” Dr Bhaskaran said.
“The next key challenge will be amplifying the electrical energy generated by the piezoelectric materials to enable them to be integrated into low-cost, compact structures.”