jueves, 22 de septiembre de 2016

How to be a leader in the digital age


How can leaders benefit from digital disruption and breakthrough technologies? Image: REUTERS/Kacper Pempel

The coming years will be a time of “Digital Leaders”.

Around the world, leaders in different fields have already started to embrace the digital revolution and recognize the power of game-changing technology.

“Every country needs a Minister of the Future,” said Salesforce’s founder and CEO Marc Benioff, at the World Economic Forum in Davos this year. And he was right.

But what does leadership actually mean?

There is a plethora of literature on leadership, but only some of it addresses how disruptive technologies can define the new wave of leaders in today’s world.

Before we move on to digital leadership, we should take a step back and look at what leadership means in general and whether universal characteristics of leadership apply to the fast-changing world of disruptive technologies.

Different ages require different kinds of leadership, but many leading theorists claim that there are certain universal characteristics that are timeless.

First, personal charisma. A charismatic person possesses a rare gift that allows them to influence followers while inspiring loyalty and obedience.

However, Max Weber predicted a decline in charismatic leadership through what he described as “routinization”.

Arguably he was right, especially in the Western World where charismatic leadership over the years has been, to some extent, “succeeded by a bureaucracy controlled by a rationally established authority or by a combination of traditional and bureaucratic authority”.

This process is evident in the European Union’s bureaucratic system, where politicians are often accused of being unable to take brave and visionary decisions.

A huge system of checks and balances and the competing national interests of 28 member states make it harder for high-ranking officials to act decisively.

Even those who possess natural charisma are unable to pursue the course of action they believe is right, because they are forced to balance various interests, maintain order and seek consensus.

Margaret Thatcher once described European leaders as being “weak” and “feeble”; the same, unfortunately, could be said about a number of leaders in Europe today. That is because their personal charisma, if they ever had it, has been stifled by bureaucracy.

Second, aside from ‘inner’ or personal levels of leadership, there is also an ‘outer’ or behavioral level which relates to how leaders deliver results, according to more integrated psychological theory. There are several universal skills that are worth mentioning, such as: (1) motivational skills; (2) team building; (3) emotional intelligence.

Obviously, this list of skills is not exhaustive, but it indicates the core abilities required to deliver successful results.

And although these ‘outer’ characteristics have largely remained the same, there are also a few which have changed substantially due to the unprecedented impact of technology.

The human impact of technology

We live in a world of rapidly advancing technology which is influencing lives like never before.

Digital technology is transforming politics, businesses, economies and society, as well as our day-to-day lives.

Digital technology has not only broken down the old, familiar models of organizations, but has also created a broad set of new challenges.

The best example of transformative change is probably within the space industry.

In 2015, we watched SpaceX's Falcon 9 rocket land safely at Cape Canaveral, a feat hailed almost immediately as another giant leap for mankind.

Reusable rockets are a fantastic business opportunity, a source of entertainment and, more importantly, another step forward in the commercialization of space travel and ultimately toward a possible colonization of other planets.

Back here on earth, we can’t deny that our world is changing as never before.

Technological revolution is evident and examples of our new reality abound.

The most popular social media platform creates no content (Facebook), one of the fastest-growing banks holds no actual money (SocietyOne), the world’s largest taxi company owns no taxis (Uber), and the largest accommodation provider owns no real estate (Airbnb).

Today’s game changers run on completely different fuel and sometimes – as the above examples clearly indicate – they revolutionize even the most basic characteristics of particular industries.

On a conceptual level, the Digital Age - sometimes called the knowledge society or networked society - is marked by several key structural changes that are reshaping leadership: (1) rapid and far-reaching technological change; (2) globalization leading to the dynamic spread of information; (3) a shift from physical attributes toward knowledge; and (4) more dispersed, less hierarchical forms of organization.

The impact of the Digital Age on leadership

Traditional skills have not been supplanted but they now co-exist with a mix of new factors.

First of all, digital leadership can be defined by a leader’s contribution to the transition toward a knowledge society and their knowledge of technology.

Digital leaders have an obligation to keep up with the ongoing global revolution.

They must understand technology not merely as an enabler but also as a revolutionary force.

Leadership must be driven by an attitude of openness and a genuine hunger for knowledge.

Of course, no rule dictates that leaders must be literate in coding or hold a degree in machine learning, but there is an imperative to understand the impact of breakthrough technologies.

Today’s leaders must have the ability to identify technological trends across different sectors, such as big data, cloud computing, automation, and robotics.

However, first and foremost they must possess sufficient knowledge and the vision to use these resources most effectively.

 


Secondly, in a knowledge society, what we do not know is as important as what we do know. Leaders should know their limits and know how to acquire missing knowledge.

A leader of the future is more like a community manager than an authoritarian.

These days, we are observing the decline of traditional hierarchical models of organization.

Take a look at how the organization of governments has changed across Western societies in recent years.

A number of governments have introduced or reinforced public consultation processes as well as opened up public data for the benefit of their citizens.

These processes, by and large, will continue to grow.

As a result, the hierarchical model tends to be suppressed and replaced by horizontal structures linking executives, leaders from different sectors, researchers and representatives of civil society.

Hierarchy fails in the digital age because it’s slow and bureaucratic, whereas the new world is constantly changing and requires immediate responses.

Information is key. In today’s world, power is not gained by expanding new territories or areas of influence but by deepening and widening networks and connections.

But what is the role of the individual or leader, or of qualities that distinguish one grain of sand from another?

Why leaders should turn their attention to tech for good

We have to shift our focus from the threat of new technologies to the opportunities they bring.

Of course, we cannot ignore the threat of new technologies.

In India, for example, around 850 government websites have been hacked since 2012.

Meanwhile, hackers recently breached US Government networks and stole more than 5.6 million fingerprint records from the Office of Personnel Management (OPM).

And governments are not always victims; they can also be predators.

Not long ago, Twitter warned a number of users that they may have been the target of a state-sponsored attack.

The debate concerning the threat of technologies, especially the internet, will never end. Policymakers have proposed different ways of regulating the web, but they are always one or two steps behind.

This is because law and regulations are stable and designed to be long-lasting, whereas the digital environment is changing rapidly.

As Hugh Fiennes, CEO of Electric Imp, puts it:

"The reality seems to be that when it comes to the internet-connected device there is no such thing as absolute security. Your device can start by being secure today and then not be secure tomorrow."

We do not claim that regulation is entirely ineffective and that we should therefore abandon legal solutions for creating a more secure environment.

But we do suggest that we look at technologies through different lenses.

We can transform the one thing that is both the strength and the weakness of breakthrough technologies: the human factor.

Having acknowledged that digital technology will play a decisive role in our future, leaders cannot afford to show fear or reluctance in implementing it. Instead, they must embrace technology with a clear view of its potential.

We must set sail for new, ambitious lands.

We choose to go to Mars because our technology enables us to at least attempt the exploration of other planets by the 2030s.

And we choose to develop other fantastic things every day – self-driving cars, more powerful batteries, the Apple Watch, drones – to name just a few.

A balanced mix of universal characteristics and digital leadership traits has the potential to guide us through years of transformation with optimism and idealism. Technology continues to prove that it can be used for the benefit of mankind, but only if we set sail on the right course and with the right companions.

weforum.org

martes, 7 de junio de 2016

Former NASA Chief Develops Brain-Like Chips

KnuEdge Chief Executive Dan Goldin, NASA Administrator from 1992 to 2001, at an agency symposium in 2010. PHOTO: NASA/BILL INGALLS

Dan Goldin’s startup, KnuEdge, has been working in secret for 10 years on a new kind of computing that mimics the human brain

Dan Goldin spent nine years as chief of the National Aeronautics and Space Administration, overseeing complex projects like the international space station. Now he’s preparing for another challenging launch: a startup that has been working in secret for 10 years on a form of brain-like computing.

His San Diego-based company, KnuEdge, has developed an unusual processor chip and related hardware and software, aiming to bring dramatic speed improvements to tough chores like finding patterns in images, sounds and financial data. KnuEdge is disclosing its plans for the first time Monday, along with a parallel effort in software to help recognize speech in noisy environments.

Many companies are working in related areas, often loosely categorized under the term artificial intelligence. Most are using software that runs on time-tested varieties of computer hardware, not designing chips from scratch.

But there is a widening recognition that conventional processors may not be the best tools for all computing jobs. Nor are they increasing in speed as quickly as they once did, in part because of diminishing returns from miniaturizing circuitry under the pattern known as Moore’s Law.

In one high-profile effort, Google Inc. recently disclosed it is using an internally designed chip for chores related to machine learning, a branch of artificial intelligence, along with chips from Intel Corp. and graphics processing units from Nvidia Corp. International Business Machines Corp. also opted to design its own processor, which emulates aspects of the brain, for specialized tasks.

Mr. Goldin, who led the space agency from 1992 to 2001, had a similar inspiration. He started by studying with Nobel laureate Gerald Edelman, a prominent theorist on the workings of the brain who died in 2014.


“I was sick and tired of hearing the world is ending, Moore’s Law is dead,” he said. “We need to bring in neurobiology.”

Any kind of chip startup is a rarity these days, as venture capitalists prefer investments that take less time and money to produce a return. The 75-year-old Mr. Goldin, KnuEdge’s chief executive, said he managed to raise $100 million from individuals with the patience to wait for a payoff. KnuEdge hired expert talent slowly and selectively, he said, but the company now has 100 employees.

Conventional chips outperform humans at chores like high-speed mathematics and sifting through large databases. However, humans are far better at tasks like distinguishing one face or voice from another, partly because of the brain’s parallel structure.

Where a typical Intel chip might have one to a dozen or so processing cores, Mr. Goldin estimated that people have a couple of hundred billion neurons—the cells that process and transmit information. Each may be able to communicate with 10,000 to 100,000 others dispersed through the brain, he said.

KnuEdge’s first chip has 256 processor cores, each of which can be working on a different program—unlike some other chips with many cores that all execute the same kind of instruction at once, Mr. Goldin said. The company also developed a companion communication technology that can link up to 512,000 of the chips at extremely high speed.
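To make that distinction concrete, here is a rough Python analogy (purely illustrative, and not KnuEdge's actual programming model, which has not been published in detail): a SIMD-style design applies one operation across many data elements in lockstep, while a MIMD-style design like the one described lets each core run its own independent program.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

data = np.arange(1, 9, dtype=float)

# SIMD-style: every "lane" applies the same operation to different data,
# the way many-core chips execute one instruction stream in lockstep.
simd_result = data * 2.0

# MIMD-style: each worker runs a different "program" on its own view of
# the data, loosely analogous to cores each executing an independent task.
programs = [np.sin, np.cos, np.sqrt, np.log]
with ThreadPoolExecutor(max_workers=len(programs)) as pool:
    mimd_results = list(pool.map(lambda f: f(data), programs))

print(simd_result)
print([round(r[0], 3) for r in mimd_results])
```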

“It is the ability to scale to unprecedented levels that makes us different,” Mr. Goldin said.

Mr. Goldin isn’t disclosing customers that have received the first circuit boards containing the KnuEdge chips.

But he expects financial-service firms may be particularly interested, for purposes such as analyzing patterns of transactions to prevent fraud. KnuEdge expects to begin selling boards and chassis based on the technology near the end of the third quarter.

Few outsiders have seen the technology in action. But Jim McGregor, an analyst at Tirias Research who was briefed on the company’s plans, said he expects such novel chip designs to play an important future role in tasks like machine learning.

“I think Dan is a pioneer of the inevitable future architecture trends in computer engineering,” said Larry Smarr, director of the California Institute for Telecommunications and Information Technology, which expects to include KnuEdge’s hardware in a lab that will study pattern-recognition technology. “I think all major companies will be doing this.”

Don Clark

wsj.com

Google has developed a 'big red button' that can be used to interrupt artificial intelligence and stop it from causing harm

Stuart Armstrong is a philosopher at the University of Oxford and one of the paper's authors.
Photo: The Future of Humanity Institute, University of Oxford

Machines are becoming more intelligent every year thanks to advances being made by companies like Google, Facebook, Microsoft, and many others.

AI agents, as they're sometimes known, can already beat us at complex board games like Go, and they're becoming more competent in a range of other areas.

Now a London artificial-intelligence research lab owned by Google has carried out a study to make sure that we can pull the plug on self-learning machines when we want to.

DeepMind, bought by Google for a reported 400 million pounds — about $580 million — in 2014, teamed up with scientists at the University of Oxford to find a way to make sure that AI agents don't learn to prevent, or seek to prevent, humans from taking control.

The paper — "Safely Interruptible Agents," published on the website of the Machine Intelligence Research Institute (MIRI) — was written by Laurent Orseau, a research scientist at Google DeepMind, and Stuart Armstrong at Oxford University's Future of Humanity Institute.

The researchers explain in the paper's abstract that AI agents are "unlikely to behave optimally all the time." They add:

If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions — harmful either for the agent or for the environment — and lead the agent into a safer situation.

The researchers, who weren't immediately available for interviewing, claim to have created a "framework" that allows a "human operator" to repeatedly and safely interrupt an AI, while making sure that the AI doesn't learn how to prevent or induce the interruptions.

The authors write:

Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for this.

The researchers found that some algorithms, such as "Q-learning" ones, are already safely interruptible, while others, like "Sarsa," aren't off the shelf but can be modified relatively easily so that they are.
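The difference comes down to how the two algorithms bootstrap their value estimates. The toy tabular sketch below is only an illustration of that distinction (the state/action sizes and learning rates are made up, and this is not the paper's formal framework): Q-learning's target ignores which action is actually taken next, so a human-forced "interruption" action does not bias its update, whereas Sarsa's target uses the action actually taken and so learns the value of the interrupted behaviour unless it is modified.

```python
import numpy as np

alpha, gamma = 0.1, 0.9          # assumed learning rate and discount
n_states, n_actions = 5, 2       # assumed toy MDP size
Q_qlearn = np.zeros((n_states, n_actions))
Q_sarsa = np.zeros((n_states, n_actions))

def q_learning_update(s, a, r, s_next):
    # Off-policy: the target uses the max over next actions, independent of
    # the action actually executed, so an externally forced action does not
    # change what the update bootstraps toward.
    target = r + gamma * Q_qlearn[s_next].max()
    Q_qlearn[s, a] += alpha * (target - Q_qlearn[s, a])

def sarsa_update(s, a, r, s_next, a_next):
    # On-policy: the target uses the action actually taken next. If a human
    # operator overrides a_next, Sarsa learns the value of the interrupted
    # behaviour, which is why it needs modification to be safely interruptible.
    target = r + gamma * Q_sarsa[s_next, a_next]
    Q_sarsa[s, a] += alpha * (target - Q_sarsa[s, a])

# Example transition: in state 0 the agent took action 1, got reward 1.0,
# landed in state 3, and a human operator then forced action 0.
q_learning_update(0, 1, 1.0, 3)
sarsa_update(0, 1, 1.0, 3, a_next=0)
```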

"It is unclear if all algorithms can be easily made safely interruptible," the authors admit.

University of Oxford philosopher Nick Bostrom. Photo: SRF

DeepMind's work with the Future of Humanity Institute is interesting: DeepMind wants to "solve intelligence" and create general purpose AIs, while the Future of Humanity Institute is researching potential threats to our existence.

The institute is led by Nick Bostrom, who believes that machines will outsmart humans within the next 100 years and thinks that they have the potential to turn against us.

Speaking at Oxford University in May 2015 at the annual Silicon Valley Comes to Oxford event, Bostrom said:

I personally believe that once human equivalence is reached, it will not be long before machines become superintelligent after that. It might take a long time to get to human level but I think the step from there to superintelligence might be very quick.

I think these machines with superintelligence might be extremely powerful, for the same basic reasons that we humans are very powerful relative to other animals on this planet.

It's not because our muscles are stronger or our teeth are sharper, it's because our brains are better.

DeepMind knows the technology that it's creating has the potential to cause harm.

The founders — Demis Hassabis, Mustafa Suleyman, and Shane Legg — allowed their company to be bought by Google on the condition that the search giant created an AI ethics board to monitor advances that Google makes in the field.

Who sits on this board and what they do, exactly, remains a mystery.

The founders have also attended and spoken at several conferences about ethics in AI, highlighting that they want to ensure the technology they and others are developing is used for good, not evil.

It's likely that they will look to incorporate some of the findings from the "Safely Interruptible Agents" paper into their work going forward.

Sam Shead


sábado, 5 de abril de 2014

$100-Million Federal Lab Will Bring More Science to Bear on Global Poverty and Development

Rajiv Shah, director of the Agency for International Development, is increasing science funding at his agency.
Credit: US Government (Shah) and USAID (emblem)


By Virginia Gewin and Nature magazine

The USAID's Global Development Lab will fund research into technological solutions for target problems

The US Agency for International Development (USAID) today announced the launch of its $100-million Global Development Lab in Washington DC — a move that will elevate the role of science at the agency.

USAID says that it will put in place a research-and-development pipeline for food security and nutrition, maternal and child survival, energy access and sustainable water solutions.

The announcement marks a shift in USAID’s approach to global development, from funding organizations to meet specific goals with existing technologies to instead identifying problems and funding research into new technological interventions to solve them.

USAID director Rajiv Shah describes the lab as “DARPA for development”, referring to the Defense Advanced Research Projects Agency (DARPA), which takes a high-risk, purpose-driven approach to developing military technology.

“At DARPA, they define needs they are trying to meet — and find partners best suited to help find solutions, test them and scale them,” he says.

That means that instead of equipping 100 hospitals with a certain type of respirator for infants, for example, the Global Development Lab would support research into innovations that might improve child survival within 48 hours of birth. Working with partners from industry and academia, the lab may fund several pilot projects, then evaluate the results and choose the most promising approaches for further investment.

“We are changing the way we work,” says Shah.

Merging interests

The emphasis on research has not been seen at the agency for several decades, says Molly Elgin-Cossart, a senior fellow in global development at the Center for American Progress in Washington DC.

In the 1970s, for example, USAID funded efforts such as the Cholera Research Laboratory in Bangladesh, which developed oral rehydration salts for the treatment of diarrhoea.

But in the late 1990s, the agency was downsized under pressure from Congress to reduce foreign aid.

Research is on the rise again, says Elgin-Cossart. Shah says that investments in science and technology across the agency have increased from $127 million in 2008 to $611 million today.

The lab now employs 26 scientists, with expertise ranging from bioengineering to tropical ecology. The lab is hiring more scientists and searching for a new chief scientist to replace Alex Dehgan, who in 2010 became the first chief scientist in two decades.

He left in December. The lab is also launching a 4-year fellowship program that, like DARPA, aims to rotate mid-career scientists through the agency.

Dehgan says that the agency needs scientific leadership at all levels of the organization, not only to generate ideas but also to recognize successes.

Elgin-Cossart says that the shift makes sense, given that development is no longer just a matter of donations from foreign governments.

Private investment and public spending by domestic governments now have much larger roles. With more organizations acting independently, development is also more complex than it used to be, she says.

To that end, the lab has already enlisted 32 organizations as ‘cornerstone’ partners to help to develop and scale up innovations the lab funds.

Together, these partners — which include companies, non-profit organizations and universities — spend roughly $30 billion on research and development for emerging markets.

One partner is Coca-Cola, based in Atlanta, Georgia, which has a vested interest in water supplies. And so, for example, USAID could encourage the company to support clean-water innovations.

Some observers are worried that the pendulum at USAID may be swinging too quickly in the direction of research.

Casey Dunning, a senior policy analyst at the Center for Global Development, a think tank in Washington DC, says that the wider development community may not have the scientific expertise to hold USAID to account for its spending.

And Amanda Glassman, director of global health policy at the Center for Global Development, says that USAID has limited experience in identifying promising technologies.

“I’m sceptical of a government agency’s capacity to pick a winner from a portfolio,” she says.

Dehgan, one of the architects behind the lab, acknowledges that development depends on more than just science and technology.

But he says that they are increasingly important, given the technical nature of current challenges such as food security, climate change and energy.

While others view the shift toward research as a gamble, Dehgan says it is a move that USAID has to make.

“If the initiative fails, we will see an agency that risks its very relevancy in the next decade.”

This article is reproduced with permission from the magazine Nature.
The article was first published on April 3, 2014.

scientificamerican.com



domingo, 14 de abril de 2013

Has An Iranian Scientist Named Ali Razeghi Invented A 'Time Machine'?




An Iranian inventor recently claimed he created a "time machine," according to reports. But the Internet is skeptical, and with good reason.
The Telegraph caused a stir Wednesday with a story about a young Tehran-based scientist, Ali Razeghi, and an invention he calls "The Aryayek Time Traveling Machine." 
Reportedly something of a mad scientist, Razeghi claimed the device, which "easily fits into the size of a personal computer case," can predict, with 98 percent accuracy, the next five to eight years of an individual's life.
The Telegraph cited an earlier story, in Farsi, by the Iranian news agency Fars.
However, The Washington Post reports that Fars quietly deleted the story, even as it began to go viral among Western media outlets. (Fars' link is now dead.) 
The Atlantic Wire points out that the story never even made it to the Science section on the site's English-language side.
A separate interview with Razeghi was published in Farsi by Iranian news site Entekhab. 
The story says Razeghi is a supervisor at Iran's Center for Strategic Inventions and Inventors and claims that his baffling invention won't be available for another few years, at least. 
"We're waiting for conditions to improve in Iran," Razeghi told the outlet, according to a translation by The Huffington Post.
Razeghi was coy during the interview, refusing to give out many details because he was worried his idea would be stolen and reproduced by China. 
He did say, however, that his device incorporates both hardware and software components, and that it cost roughly 500,000 Iranian tomans (about $400). 
When asked whether he was worried the machine might cause problems, he said he envisions it used selectively, to tell a couple the future sex of their child, for example.
Neither Iran nor Razeghi has publicly responded to the report.
Radio Free Europe writes that "most Iran watchers will be treating his announcement with a certain amount of skepticism," in light of a recent flap that involved a Photoshopped picture of Iran's Qaher-313 stealth fighter jet.
Scientists around the world have made previous claims (some dubious, some less so) about their own "time machine" inventions. 
In 2009, a man named Steve Gibbs, of Clearwater, Neb., said he had invented a "hyperdimensional resonator," which he claimed could be used for "out-of-body time travel," according to the Examiner.
More recently, in 2011, physicists from Cornell University in Ithaca, N.Y., announced they had developed a "time cloak" that they say can hide events for trillionths of a second.

huffingtonpost.com

RoboBees Will Soon Sting You: Scientists Developing Bee-Size Flying Robots to Pollinate Flowers



(Photo: Scientific American)
There has been a huge fall in the honeybee population due to a mysterious affliction called Colony Collapse Disorder (CCD), which began to wipe out honeybee hives.
These bees handle most commercial pollination in the U.S., and their loss reduces crop production because of lower rates of pollination. In 2009, teams of Harvard and Northeastern University scientists began to seriously consider creating a robotic bee colony.
These teams are now working on a swarm of tiny RoboBees, bee-size flying robots that could pollinate flowers and do the job of real bees if needed.
According to Inhabitat, "In 2009 the three of us began to seriously consider what it would take to create a robotic bee colony," the team leaders told Scientific American.
"We wondered if mechanical bees could replicate not just an individual's behavior but the unique behavior that emerges out of interactions among thousands of bees. 
We have now created the first RoboBees, flying bee-size robots, and are working on methods to make thousands of them cooperate like a real hive," they added.

Objectives of RoboBees Project



The RoboBees' size presents a huge variety of physical and computational challenges. To power and control flight, the bees must employ specially designed artificial muscles, because conventional motors and bearings are inefficient at that scale.
In addition, the tiny bees must have autonomy.
In other words, they must sense, think and act on their own, using miniature sensors.
The biomimicking scheme, also known as the Micro Air Vehicles Project, aims to "push advances in miniature robotics and the design of compact high-energy power sources; spur innovations in ultra-low-power computing and electronic 'smart' sensors; and refine coordination algorithms to manage multiple, independent machines".
Like real bees, RoboBees will work best when employed as swarms of thousands of individuals, coordinating their actions without relying on a single leader. 
The hive must be resilient enough that the group can complete its objectives even if many individual bees fail.
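To illustrate the kind of leaderless, failure-tolerant coordination this implies, here is a minimal sketch under assumed, simplified conditions (the claim-sharing mechanism, the numbers and the behaviour are hypothetical, not the Harvard team's actual control scheme): each bee claims the nearest unclaimed flower on its own, and the field still gets covered when some bees drop out.

```python
import random

random.seed(0)

# Toy model: flowers at random positions, bees that claim the nearest
# unclaimed flower using a shared claim list (think of local broadcasts),
# with no central coordinator. Purely illustrative.
flowers = [(random.random(), random.random()) for _ in range(20)]
claimed = set()

def next_target(pos):
    free = [f for f in flowers if f not in claimed]
    if not free:
        return None
    target = min(free, key=lambda f: (f[0] - pos[0]) ** 2 + (f[1] - pos[1]) ** 2)
    claimed.add(target)
    return target

bees = {i: (random.random(), random.random()) for i in range(5)}
failed = {2, 4}  # simulate bees dropping out mid-mission

while len(claimed) < len(flowers):
    for i in bees:
        if i in failed:
            continue
        target = next_target(bees[i])
        if target:
            bees[i] = target  # fly to the claimed flower

print(f"{len(claimed)} flowers covered by {len(bees) - len(failed)} working bees")
```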
Moreover, scientists can use RoboBees for search and rescue, exploration of hazardous environments such as nuclear disaster sites, high-resolution weather and climate mapping, and traffic monitoring.
(Source: Scientific American)
designntrend.com

Freed From Its Cage, the Gentler Robot




Rethink Robotics

The Baxter robot from Rethink Robotics.



FACTORY robots are usually caged off from humans on the assembly line lest the machines’ powerful steel arms deliver an accidental, bone-crunching right hook.


But now, gentler industrial robots, designed to work and play well with others, are coming out from behind their protective fences to work shoulder-to-shoulder with people. 
It’s an advance made possible by sophisticated algorithms and improvements in sensing technologies like computer vision.
The key to these new robots is the ability to respond more flexibly, anticipating and adjusting to what humans want. 
That is in contrast to earlier generations of robots that often required extensive programming to change the smallest details of their routine, said Henrik Christensen, director of the robotics program at the Georgia Institute of Technology.
“Researchers in labs worldwide are building robots that can predict what you’ll do next and be ready to give you the best possible assistance,” he said.
One of those researchers is Julie A. Shah, an assistant professor in the department of aeronautics and astronautics at the Massachusetts Institute of Technology. Dr. Shah once taught robots to do tasks the old way: by hitting a button that essentially told them “good,” “bad” or “neutral” as they did each part of a job. Now she has added a technique called cross-training, in which robots and humans exchange roles, learning a thing or two from each other in the process.
In a recent study, Dr. Shah and a student had human-robot teams perform a chore borrowed from the assembly line: the humans placed screws and the robots did the drilling. Then the teammates exchanged jobs and the robots observed the humans drill.
“The robot gathers information on how the person does the drilling” and adds that information to its algorithms, Dr. Shah said. “The robot isn’t learning one optimal way to drill. Instead it is learning a teammate’s preferences, and how to cooperate.”
When the cross-trained teams resumed their original roles, both robots and people did their jobs more efficiently, the study found. 
The time that the humans were idle while waiting for the robot to finish a task dropped 41 percent and the time that humans and robots worked simultaneously increased 71 percent, when compared with teams working with robots trained the old way.
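As a rough illustration of what "learning a teammate's preferences" can mean in code, here is a minimal sketch under assumed, simplified conditions (the hole names, the counting model and the data are hypothetical, not the MIT study's actual algorithm): the robot tallies how this particular person sequences screw placements during the role swap, then predicts the next hole so it can position the drill in advance.

```python
from collections import Counter, defaultdict

# Tally which hole the human tends to work on after each hole.
transitions = defaultdict(Counter)

def observe(sequence):
    """Record the teammate's observed placement order from one demonstration."""
    for prev, nxt in zip(sequence, sequence[1:]):
        transitions[prev][nxt] += 1

def predict_next(last_hole):
    """Anticipate the teammate's most likely next placement, if any was observed."""
    if transitions[last_hole]:
        return transitions[last_hole].most_common(1)[0][0]
    return None

# Demonstrations gathered while the roles are swapped.
observe(["A", "B", "C", "D"])
observe(["B", "C", "D", "A"])
print(predict_next("B"))  # -> "C": this teammate usually follows hole B with C
```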
“This is a fascinating application of cross-training,” said Andrea Thomaz, an assistant professor of interactive computing at Georgia Tech. 
“By learning the human’s role, the robot can better anticipate actions and be a better partner, even if in the end it will only do one role.”
The humans on the teams also improved their teamwork skills, said Illah R. Nourbakhsh, professor of robotics at Carnegie Mellon University and author of the book “Robot Futures,” published this month by M.I.T. Press. 
Prof. Julie Shah of M.I.T. and two graduate students, Ron Wilcox and Matthew Gombolay, ran a cross-training experiment.

“In the future, this idea of cross-training will turn out to be really important as robots start to work shoulder-to-shoulder with us,” Dr. Nourbakhsh said. 
“We are not very good at adopting the point of view of a robot. 
This study showed that we can learn, though, with the right signals.”
Dr. Christensen of Georgia Tech said: “Robots of the future won’t just be in manufacturing. 
Almost any area could have a robot that would help make our life easier,” whether “lifting patients in hospital beds or helping at home.
“But they have to be safe, and they have to have the kind of anticipation that Julie Shah is working on, because they have to be able to automatically figure out what we need help with,” he said.
Gentle, helpful robots aren’t just being created in labs; they are also arriving in the marketplace. 
Since January, Rethink Robotics of Boston has been sending customers its two-armed robot called Baxter, which can work uncaged, moving among people. 
“We are shipping robots every day and have a backlog of orders of about three months,” said Rodney Brooks, Rethink’s founder, chairman and chief technology officer.
Baxter, which costs $22,000, can lift objects from a conveyor belt. “You don’t have to tell it the exact velocity,” Dr. Brooks said. “It sees objects and grabs them, matching its speed to the speed of the object.”
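One simple way to picture that speed matching (a hypothetical sketch only, not Baxter's actual API or control law): estimate the object's speed from two camera observations and nudge the arm's speed toward it.

```python
# Illustrative velocity matching on a conveyor; names and numbers are assumptions.
def estimate_speed(pos_t0, pos_t1, dt):
    """Object speed from two observed positions a short time apart."""
    return (pos_t1 - pos_t0) / dt

def arm_speed_command(current_arm_speed, object_speed, gain=0.5):
    """Proportional adjustment of the arm speed toward the object's speed."""
    return current_arm_speed + gain * (object_speed - current_arm_speed)

obj_speed = estimate_speed(pos_t0=0.10, pos_t1=0.14, dt=0.2)  # 0.2 m/s belt
arm_speed = 0.0
for _ in range(10):
    arm_speed = arm_speed_command(arm_speed, obj_speed)
print(round(arm_speed, 3), "m/s (converging toward the belt speed)")
```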
Baxter is used in manufacturing plants and shops of varying sizes. One example is the Rodon Group, a plastic injection molding company in Hatfield, Pa., where Baxter packs boxes on the factory floor.
Baxter’s cameras inspect what is to be lifted, recognizing an object from many angles. In the coming year, Baxter will be able to grab objects not only from above, but also from the side, putting them into a milling machine, for example, and pressing the “go” button. It will also be able to connect with other machines, to synchronize tasks.
“Baxter is a great starting point for this new generation of robots,” said Dr. Christensen of Georgia Tech, who has no connection to Rethink Robotics’ work, “making the technology accessible to companies that before would have had to pay hundreds of thousands of dollars.”
“He’s opening up a new market,” Dr. Christensen said of Baxter’s work.
Baxter is not the only unfenced robot on the assembly line. A Danish company, Universal Robots, for example, sells a one-armed robot for $33,000 that can also be used without a cage.
IMPRESSIVE as the new robots are, they will soon have even more advanced skills, said Stefan Schaal, a professor of computer science, neuroscience and biomedical engineering at the University of Southern California and a director of the Max Planck Institute for Intelligent Systems in Germany. In the future, robots will be able to go onto the Internet and exchange information, leading to vast gains in what they can accomplish.
“It will take time before we get there,” Dr. Schaal said, “but it will happen.”
nytimes.com