Robots learn to use kitchen tools by watching YouTube videos


Researchers at the University of Maryland Institute for Advanced Computer Studies (UMIACS) partnered with a scientist at the National Information Communications Technology Research Centre of Excellence in Australia (NICTA) to develop robotic systems that are able to teach themselves. Specifically, these robots are able to learn the intricate grasping and manipulation movements required for cooking by watching online cooking videos. The key breakthrough is that the robots can “think” for themselves, determining the best combination of observed motions that will allow them to efficiently accomplish a given task.

https://www.sciencedaily.com/releases/2015/01/150113090600.htm

Paper: http://www.umiacs.umd.edu/~yzyang/paper/YouCookMani_CameraReady.pdf

Robot Learning Manipulation Action Plans by “Watching” Unconstrained Videos from the World Wide Web

Yezhou Yang University of Maryland yzyang@cs.umd.edu

Yi Li NICTA, Australia yi.li@nicta.com.au

Cornelia Fermüller University of Maryland fer@umiacs.umd.edu

Yiannis Aloimonos University of Maryland yiannis@cs.umd.edu

Abstract

In order to advance action generation and creation in robots beyond simple learned schemas we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions underlying longer actions seen in video, in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation. Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by “watching” unconstrained videos with high accuracy.

Introduction

The ability to learn actions from human demonstrations is one of the major challenges for the development of intelligent systems. Particularly, manipulation actions are very challenging, as there is large variation in the way they can be performed and there are many occlusions. Our ultimate goal is to build a self-learning robot that is able to enrich its knowledge about fine-grained manipulation actions by “watching” demo videos. In this work we explicitly model actions that involve different kinds of grasping, and aim at generating a sequence of atomic commands by processing unconstrained videos from the World Wide Web (WWW).

The robotics community has been studying perception and control problems of grasping for decades (Shimoga 1996). Recently, several learning based systems were reported that infer contact points or how to grasp an object from its appearance (Saxena, Driemeyer, and Ng 2008; Lenz, Lee, and Saxena 2014). However, the desired grasping type could be different for the same target object, when used for different action goals. Traditionally, data about the grasp has been acquired using motion capture gloves or hand trackers, such as the model-based tracker of (Oikonomidis, Kyriazis, and Argyros 2011). The acquisition of grasp information from video (without 3D information) is still considered very difficult because of the large variation in appearance and the occlusions of the hand from objects during manipulation.

Our premise is that actions of manipulation are represented at multiple levels of abstraction. At lower levels the symbolic quantities are grounded in perception, and at the high level a grammatical structure represents symbolic information (objects, grasping types, actions). With the recent development of deep neural network approaches, our system integrates a CNN based object recognition and a CNN based grasping type recognition module. The latter recognizes the subject’s grasping type directly from image patches.

The grasp type is an essential component in the characterization of manipulation actions. Just from the viewpoint of processing videos, the grasp contains information about the action itself, and it can be used for prediction or as a feature for recognition. It also contains information about the beginning and end of action segments, thus it can be used to segment videos in time. If we are to perform the action with a robot, knowledge about how to grasp the object is necessary so the robot can arrange its effectors. For example, consider a humanoid with one parallel gripper and one vacuum gripper. When a power grasp is desired, the robot should select the vacuum gripper for a stable grasp, but when a precision grasp is desired, the parallel gripper is a better choice. Thus, knowing the grasping type provides information for the robot to plan the configuration of its effectors, or even the type of effector to use.

In order to perform a manipulation action, the robot also needs to learn what tool to grasp and on what object to perform the action. Our system applies CNN based recognition modules to recognize the objects and tools in the video. Then, given the beliefs of the tool and object (from the output of the recognition), our system predicts the most likely action using language, by mining a large corpus using a technique similar to (Yang et al. 2011).
Putting everything together, the output from the lower level visual perception system is in the form of (LeftHand GraspType1 Object1 Action RightHand GraspType2 Object2). We will refer to this septet of quantities as a visual sentence.

At the higher level of representation, we generate a symbolic command sequence. (Yang et al. 2014) proposed a context-free grammar and related operations to parse manipulation actions. However, their system only processed RGBD data from a controlled lab environment. Furthermore, they did not consider the grasping type in the grammar. This work extends (Yang et al. 2014) by modeling manipulation actions using a probabilistic variant of the context-free grammar, and explicitly modeling the grasping type. Using as input the belief distributions from the CNN based visual perception system, a Viterbi probabilistic parser is used to represent actions in the form of a hierarchical and recursive tree structure. This structure innately encodes the order of atomic actions in a sequence, and forms the basic unit of our knowledge representation. By reverse-parsing it, our system is able to generate a sequence of atomic commands in predicate form, i.e. as Action(Subject, Patient), plus the temporal information necessary to guide the robot. This information can then be used to control the robot effectors (Argall et al. 2009).

Our contributions are twofold. (1) A convolutional neural network (CNN) based method has been adopted to achieve state-of-the-art performance in grasping type classification and object recognition on unconstrained video data; (2) a system for learning information about human manipulation actions has been developed that links lower level visual perception and higher level semantic structures through a probabilistic manipulation action grammar.
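
To make the higher-level step concrete, here is a toy sketch (mine, not the authors' code) of the idea: a visual sentence emitted by the CNN modules is parsed with a probabilistic context-free grammar using a Viterbi parser, and the atomic command can then be read off the most probable tree. The grammar, symbols and probabilities below are illustrative placeholders, not the paper's manipulation grammar, and for brevity the toy sentence covers one hand rather than the full septet.

```python
# Toy sketch: Viterbi-parse a "visual sentence" (hand, grasp type, action,
# object symbols from the CNN modules) with a probabilistic context-free
# grammar. Grammar and probabilities are illustrative placeholders.
import nltk

grammar = nltk.PCFG.fromstring("""
    S  -> HP ACT OP  [1.0]
    HP -> HAND GRASP [1.0]
    OP -> OBJ        [1.0]
    HAND  -> 'LeftHand' [0.5]   | 'RightHand' [0.5]
    GRASP -> 'PowerGrasp' [0.6] | 'PrecisionGrasp' [0.4]
    ACT   -> 'Cut' [0.5]        | 'Pour' [0.5]
    OBJ   -> 'Knife' [0.5]      | 'Tomato' [0.5]
""")

parser = nltk.parse.ViterbiParser(grammar)
visual_sentence = ['RightHand', 'PowerGrasp', 'Cut', 'Tomato']

for tree in parser.parse(visual_sentence):
    print(tree)  # most probable parse
    # Reverse-parsing the tree yields an atomic command in predicate form,
    # here roughly Cut(RightHand:PowerGrasp, Tomato).
```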

 

Cleaning air with light


Paper: http://pubs.acs.org/doi/abs/10.1021/es5012687

Gas-Phase Advanced Oxidation for Effective, Efficient in Situ Control of Pollution

Matthew S. Johnson*†, Elna J. K. Nilsson†, Erik A. Svensson†, and Sarka Langer∥
† Department of Chemistry, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen Ø, Denmark
∥ Department of Chemistry and Materials Technology, SP Technical Research Institute of Sweden, Box 857, SE-501 15 Borås, Sweden
Environ. Sci. Technol., 2014, 48 (15), pp 8768–8776
DOI: 10.1021/es5012687
Publication Date (Web): June 23, 2014

Abstract

In this article, gas-phase advanced oxidation, a new method for pollution control building on the photo-oxidation and particle formation chemistry occurring in the atmosphere, is introduced and characterized. The process uses ozone and UV-C light to produce radicals in situ to oxidize pollution, generating particles that are removed by a filter; ozone is removed using a MnO2 honeycomb catalyst. This combination of in situ processes removes a wide range of pollutants with a comparatively low specific energy input. Two proof-of-concept devices were built to test and optimize the process. The laboratory prototype was built of standard ventilation duct and could treat up to 850 m³/h. A portable continuous-flow prototype built in an aluminum flight case was able to treat 46 m³/h. Removal efficiencies of >95% were observed for propane, cyclohexane, benzene, isoprene, aerosol particle mass, and ozone for concentrations in the range of 0.4–6 ppm and exposure times up to 0.5 min. The laboratory prototype generated an OH• concentration, derived from the propane reaction, of (2.5 ± 0.3) × 10¹⁰ cm⁻³ at a specific energy input of 3 kJ/m³, and the portable device generated (4.6 ± 0.4) × 10⁹ cm⁻³ at 10 kJ/m³. Based on these results, in situ gas-phase advanced oxidation is a viable control strategy for most volatile organic compounds, specifically those with an OH• reaction rate constant higher than ca. 5 × 10⁻¹³ cm³/s. Gas-phase advanced oxidation is able to remove compounds that react with OH• and to control ozone and total particulate mass. Secondary pollution including formaldehyde and ultrafine particles might be generated, depending on the composition of the primary pollution.
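
As a rough illustration of why that rate-constant cutoff matters (my back-of-the-envelope numbers, not the paper's analysis): if OH• oxidation is treated as a pseudo-first-order loss, the fraction of a compound removed in time t is 1 − exp(−k·[OH•]·t). With the lab prototype's reported OH• concentration and typical literature rate constants, fast-reacting compounds like isoprene are removed almost completely in one pass, while compounds near the quoted threshold are only partially oxidized; the >95% efficiencies measured in the paper also reflect residence time, recirculation and particle capture.

```python
# Back-of-the-envelope single-pass estimate (mine, not the paper's analysis):
# fraction removed = 1 - exp(-k * [OH] * t). Rate constants are typical
# literature values, used only to illustrate the ~5e-13 cm3/s cutoff.
import math

OH = 2.5e10   # OH* concentration reported for the lab prototype, cm^-3
t = 30.0      # exposure time in s (the paper tests up to 0.5 min)

k_OH = {      # approximate OH* rate constants, cm^3 molecule^-1 s^-1
    "propane":  1.1e-12,
    "benzene":  1.2e-12,
    "isoprene": 1.0e-10,
    "cutoff":   5.0e-13,  # the paper's stated viability threshold
}

for compound, k in k_OH.items():
    removed = 1.0 - math.exp(-k * OH * t)
    print(f"{compound:9s} k = {k:.1e}  removed ~ {removed:5.1%}")
```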

News:

http://news.ku.dk/all_news/2016/03/cleaning-air-with-light-proven-to-be-uniquely-versatile/

A novel invention using light to remove air pollution has proven to be more versatile than any competing system. It eliminates fumes as chemically diverse as odorous sulfur compounds and health-hazardous hydrocarbons while consuming a minimum of energy.

The air cleaner is named GPAO, and its inventor, Matthew Johnson, Professor of environmental chemistry at the University of Copenhagen, Denmark, published the results of testing the system in the article “Gas-Phase Advanced Oxidation for Effective, Efficient in Situ Control of Pollution” in the scientific journal Environmental Science & Technology.

Air pollution hard to remove
Pollution is notoriously difficult to remove from air. Previous systems for controlling air pollution consume large amounts of energy, for example because they burn or freeze the pollutants. Other systems require frequent maintenance because the charcoal filters they use need replacement. GPAO needs no filters, little energy and less maintenance, explains atmospheric chemist Matthew Johnson.

“As a chemist, I have studied the natural ability of the atmosphere to clean itself. Nature cleans air in a process involving ozone, sunlight and rain. Except for the rain, GPAO does the very same thing, but speeded up by a factor of a hundred thousand”, explains Johnson.

Gas is difficult to remove. Dust is easy
In the GPAO system, the polluted gas is mixed with ozone in the presence of fluorescent lamps. This causes free radicals to form that attack the pollution, forming sticky products that clump together. The products form fine particles which grow into a type of airborne dust. And whereas gas-phase pollution is hard to remove, dust is easy. Just give it an electrostatically charged surface to stick to, and it goes no further.

“Anyone who has ever tried dusting a computer screen knows how well dust sticks to a charged surface. This effect means that we don’t need traditional filters, giving our system an advantage in working with large dilute airstreams”, says Johnson.

Removes foul smells as well as noxious fumes
Patented in 2009, the system has been commercialized since 2013 and is already in use at an industrial site processing waste water. Here it eliminates foul smells from the process and saved the plant from being closed. A second industrial installation removes 96% of the smell generated by a factory making food for livestock. Further testing by University of Copenhagen atmospheric chemists has shown that the GPAO system efficiently removes toxic fumes from fiberglass production and from an iron foundry, which emitted benzene, toluene, ethylbenzene and xylene.

Viruses, fungal spores and bacteria removed in bonus effect
Another series of tests revealed that the rotten egg smells of pig farming and wastewater treatment are easily removed. Odors such as the smells from breweries, bakeries, food production, slaughterhouses and other process industries can be eliminated. And that is not all, says Professor Johnson.

“Because the system eats dust, even hazardous particles such as pollen, spores and viruses are removed” states Johnson, who hopes to see his system in use in all manner of industries because air pollution is such a huge health risk.

Air pollution more deadly than traffic, smoking and diabetes
According to a recent report by the World Health Organization, indoor and outdoor air pollution causes 7 million premature deaths annually, more than road deaths, smoking and diabetes combined. Air pollution is linked to heart disease, cancer, asthma, allergy, lost productivity and irritation.

“I have always wanted to use chemistry to make the world a better place. I genuinely feel that GPAO will improve life for millions of people, especially those living in cities or near industrial producers” concludes Matthew Johnson.

Jeremy England


 

Jeremy England, a 31-year-old physicist at MIT, thinks he has found the underlying physics driving the origin and evolution of life.

It could further liberate biologists from seeking a Darwinian explanation for every adaptation and allow them to think more generally in terms of dissipation-driven organization. They might find, for example, that “the reason that an organism shows characteristic X rather than Y may not be because X is more fit than Y, but because physical constraints make it easier for X to evolve than for Y to evolve.”

http://www.businessinsider.com/groundbreaking-idea-of-lifes-origin-2014-12

http://www.englandlab.com

DNA Survives Re-Entry Into Earth’s Atmosphere

Functional Activity of Plasmid DNA after Entry into the Atmosphere of Earth Investigated by a New Biomarker Stability Assay for Ballistic Spaceflight Experiments

Paper: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0112979

Abstract

Sounding rockets represent an excellent platform for testing the influence of space conditions, during both the passage through Earth’s atmosphere and re-entry, on biological, physical and chemical experiments for astrobiological purposes. We designed a robust functionality biomarker assay to analyze the biological effects of the suborbital spaceflight conditions prevailing during ballistic rocket flights. During the TEXUS-49 rocket mission in March 2011, artificial plasmid DNA carrying a fluorescent marker (enhanced green fluorescent protein: EGFP) and an antibiotic resistance cassette (kanamycin/neomycin) was attached at different positions on the rocket’s exterior: (i) at 90° intervals around the outer surface of the payload, (ii) in the grooves of screw heads located between the surface application sites, and (iii) on the surface of the bottom side of the payload. Temperature measurements on the inside of the recovery module showed two major peaks at 118 and 130°C during the 780-second flight, while outer gas temperatures of more than 1000°C were estimated at the sample application locations. Directly after retrieval and return transport of the payload, the plasmid DNA samples were recovered. Subsequent analyses showed that DNA could be recovered from all application sites, with a maximum of 53% in the grooves of the screw heads. We could further show that up to 35% of the DNA retained its full biological function, i.e., mediating antibiotic resistance in bacteria and fluorescent marker expression in eukaryotic cells. These experiments show that our plasmid DNA biomarker assay is suitable for characterizing the environmental conditions affecting DNA during an atmospheric transit and re-entry, and they constitute the first report of the stability of DNA during hypervelocity atmospheric transit, indicating that sounding rocket flights can be used to model the high-speed atmospheric entry of organics-laden artificial meteorites.

Experimental platforms for microbial survival under space conditions

Various experimental platforms have been used so far to investigate this question (reviewed in [11]). These platforms were often located in low Earth orbit (LEO); e.g., in the early 1990s the European Space Agency (ESA) initiated the development of BIOPAN, a facility designed for short-term (up to two weeks) space exposure experiments with biological and material samples [12]. This facility, conceived for multi-user purposes, was mounted on the external surface of the descent module of the Russian Foton satellite. Astrobiology, radiobiology, and materials science experiments have been performed on BIOPAN analyzing the effects of solar UV radiation, space vacuum, extreme temperatures, and microgravity on biological samples [12]–[17]. Nowadays, platforms with even longer exposure times exist, such as the multi-user facility EXPOSE, located on the outer platforms of the International Space Station (ISS). Different test samples of microbial and eukaryotic origin were subjected for 1.5 years to space vacuum, solar electromagnetic radiation, cosmic radiation and simulated Martian surface conditions, and after sample recovery and analysis the survival of a fraction of the tested organisms could be shown [18]–[24].

A critical question is whether microorganisms can not only survive residence in space but also withstand the hostile conditions of entering a planet’s atmosphere, e.g. when situated on a meteorite. At velocities of 10–20 km/s, the material is heated by friction and the surface melts. However, the atmospheric transit time is only in the range of a few seconds, preventing the heat from penetrating more than a few centimeters into the rock. In fact, temperatures of the interior material of some Martian meteorites stayed far below 100°C [25]. Many microorganisms can survive and still metabolize under these conditions; e.g., Bacillus subtilis spores can survive moist heat at temperatures of up to 100°C for 20 to 30 min, whereas in dry heat spore survival increases by a factor of 1000 [26]. Also, spores of B. subtilis strain WN511, embedded in rock samples, temporarily survived 145°C during the re-entry phase of a ballistic sounding rocket flight at 1.3 km/s when not directly exposed to the harsh re-entry conditions at the leading edge [27]. However, in the two atmospheric entry simulation experiments STONE-5 and STONE-6, for which spores of B. subtilis and Ulocladium atrum and the cyanobacterium Chroococcidiopsis sp. were embedded in different rock samples mounted on the heat shield of the Foton-M2 capsule, no viable microorganisms were recovered after the flight [28]–[30].
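
The "few centimeters" scale follows from simple heat conduction: in time t, heat penetrates a characteristic depth of roughly √(αt), where α is the thermal diffusivity of the rock. A minimal sketch with an assumed, typical value (not a figure from the paper):

```python
# Rough estimate (assumed values, not from the paper): heat penetrates a
# characteristic depth of about sqrt(alpha * t) in time t, where alpha is
# the thermal diffusivity (~1e-6 m^2/s is typical for rock).
import math

ALPHA = 1e-6  # thermal diffusivity of rock, m^2/s (assumed)

for t in (1, 10, 100):  # atmospheric transit time, s
    depth_mm = math.sqrt(ALPHA * t) * 1000
    print(f"t = {t:3d} s -> heated layer ~ {depth_mm:4.1f} mm")
```

For a transit of a few seconds this gives a heated layer of only a few millimeters, consistent with meteorite interiors staying cool.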

The observed microorganism die-off is due, among other factors, to damage at the DNA level. The high radiation in space leads to the formation of thymine dimers as well as single- and double-strand breaks, and it activates the UV repair mechanisms inside the cell, which often cannot cope with the extent of DNA damage (reviewed in [4], [24], [31]–[33]). Additionally, the extreme temperatures, especially the high temperatures during entry into a planet’s atmosphere, are detrimental to the whole organism if it is not well protected. High temperatures also have extreme effects at the DNA level. With increasing temperature the DNA double strand melts, separating into two single strands at a specific melting temperature that depends on the sequence and nucleobase composition [34]–[36]. At still higher temperatures the DNA molecules start to degrade through heat-induced weakening of the phosphodiester bond and subsequent hydrolysis [37]–[38].
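
To make the sequence dependence of the melting temperature concrete, here is a standard textbook approximation (the Wallace rule; an illustration of the general principle, not a method used in this paper):

```python
# Illustration (standard textbook approximation, not from the paper): the
# Wallace rule estimates the melting temperature of a short DNA oligo from
# its base composition; GC pairs (3 hydrogen bonds) contribute more than
# AT pairs (2 hydrogen bonds). Valid only for roughly 14-20 nt oligos;
# long plasmid DNA needs nearest-neighbor models and salt corrections.
def wallace_tm(seq: str) -> float:
    """Approximate melting temperature in degrees C of a short DNA oligo."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

print(wallace_tm("ATATATATATATATAT"))  # 32.0: AT-rich, melts low
print(wallace_tm("GCGCGCGCGCGCGCGC"))  # 64.0: GC-rich, melts high
```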

Sounding Rockets as research platforms for astrobiological experiments

Experimental time on the ISS and Foton satellites is difficult to acquire because of rare flight opportunities and a highly selective, time-consuming project selection process that at the same time demands the highest safety standards. A suitable alternative to these long-term missions is the execution of experiments on sounding rockets [39]. Whenever microgravity and time in space in the range of minutes are sufficient, these platforms are the means of choice. Sounding rocket programs on different launch vehicles (e.g. Mini-TEXUS, TEXUS and MAXUS) offer microgravity times from 3 to 13 minutes with a guaranteed quality of 10⁻⁴ g, normally reaching up to 10⁻⁶ g. During these missions altitudes from 150 up to 800 km are reached, which allows samples mounted on the outside of the payload to be exposed not only to microgravity but also to space conditions including vacuum, radiation and extreme temperatures [39]–[41]. This provides an excellent test platform for astrobiology research, especially for the analysis and characterization of biomarkers. Samples can easily be applied to the outside of the payload; sensors monitoring space conditions can be attached in close vicinity to the samples; very short late-access and early-retrieval times in the range of a few hours can be achieved; safety requirements are less rigorous compared to crewed missions; experiment equipment is less costly; and the payload is reusable. Another advantage is the high launch frequency with which experiments can be performed and validated. Within the scope of the European space programs, 10 to 15 sounding rocket missions are launched every year, representing a potential high-throughput test platform for astrobiology research, especially in the field of biomarker characterization.

 

Combustion experiments conducted in zero gravity



Recent tests aboard the International Space Station have shown that fire in space can be less predictable and potentially more lethal than it is on Earth. “There have been experiments,” says NASA aerospace engineer Dan Dietrich, “where we observed fires that we didn’t think could exist, but did.”

But odd things happen in space, where gravity loses its grip on solids, liquids and gases. Without gravity, hot air expands but doesn’t move upward. The flame persists because of the diffusion of oxygen, with random oxygen molecules drifting into the fire. Absent the upward flow of hot air, fires in microgravity are dome-shaped or spherical—and sluggish, thanks to meager oxygen flow. “If you ignite a piece of paper in microgravity, the fire will just slowly creep along from one end to the other,” says Dietrich. “Astronauts are all very excited to do our experiments because space fires really do look quite alien.”
http://www.smithsonianmag.com/science-nature/in-space-flames-behave-in-ways-nobody-thought-possible-132637810/#ysY8yCgEczwH6x6A.99
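
A quick scaling argument (mine, not NASA's) shows why microgravity flames are so sluggish: oxygen must arrive by molecular diffusion alone, which takes a time of order L²/D to cover a distance L, whereas buoyant convection on Earth replenishes oxygen at speeds of tens of cm/s.

```python
# Quick scaling estimate (mine, not NASA's): in microgravity the flame is
# fed by molecular diffusion alone, and oxygen needs a time of order
# L**2 / D to cover a distance L. On Earth, buoyant convection moves hot
# gas at tens of cm/s, replenishing oxygen orders of magnitude faster.
D_O2 = 0.2  # diffusivity of O2 in air near room temperature, cm^2/s

for L in (0.1, 1.0, 3.0):  # distance to the flame front, cm
    t_diffusion = L**2 / D_O2
    print(f"L = {L:3.1f} cm -> oxygen arrives in ~{t_diffusion:5.2f} s")
```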

Direct brain-to-brain connection has been established between humans for the second time


Scientists in the US have successfully linked the brains of six people, allowing one person to control the hands of another using just their thoughts.

https://www.sciencedaily.com/releases/2014/11/141105154507.htm

Paper: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0111332

Abstract

We describe the first direct brain-to-brain interface in humans and present results from experiments involving six different subjects. Our non-invasive interface, demonstrated originally in August 2013, combines electroencephalography (EEG) for recording brain signals with transcranial magnetic stimulation (TMS) for delivering information to the brain. We illustrate our method using a visuomotor task in which two humans must cooperate through direct brain-to-brain communication to achieve a desired goal in a computer game. The brain-to-brain interface detects motor imagery in EEG signals recorded from one subject (the “sender”) and transmits this information over the internet to the motor cortex region of a second subject (the “receiver”). This allows the sender to cause a desired motor response in the receiver (a press on a touchpad) via TMS. We quantify the performance of the brain-to-brain interface in terms of the amount of information transmitted as well as the accuracies attained in (1) decoding the sender’s signals, (2) generating a motor response from the receiver upon stimulation, and (3) achieving the overall goal in the cooperative visuomotor task. Our results provide evidence for a rudimentary form of direct information transmission from one human brain to another using non-invasive means.
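
A minimal sketch of what the sender side of such a pipeline could look like (assumptions mine: the channel, threshold, and use of mu-band power are placeholders, not the authors' classifier): motor imagery suppresses the 8–13 Hz mu rhythm over motor cortex, so a drop in mu-band power relative to a resting baseline can serve as the fire/no-fire decision that is sent over the network to trigger TMS.

```python
# Minimal sketch of the sender side of a brain-to-brain interface
# (assumptions mine, not the authors' pipeline): motor imagery suppresses
# the 8-13 Hz mu rhythm, so a drop in mu-band power relative to a resting
# baseline stands in for the fire/no-fire decision sent to the receiver.
import numpy as np
from scipy.signal import welch

FS = 256  # EEG sampling rate in Hz (assumed)

def mu_band_power(window: np.ndarray) -> float:
    """Mean 8-13 Hz spectral power of a single-channel EEG window."""
    freqs, psd = welch(window, fs=FS, nperseg=len(window))
    band = (freqs >= 8) & (freqs <= 13)
    return float(psd[band].mean())

def detect_motor_imagery(window: np.ndarray, baseline: float) -> bool:
    """Fire when mu power falls below half the resting baseline (assumed)."""
    return mu_band_power(window) < 0.5 * baseline

# Demo with synthetic data: a strong 10 Hz rhythm stands in for "rest",
# an attenuated one for "motor imagery".
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
rest = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
imagery = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

baseline = mu_band_power(rest)
print(detect_motor_imagery(imagery, baseline))  # True -> send TMS trigger
```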

Time Lapse Sky Shows Earth Rotating Instead of Stars

The time-lapse starfield has been edited to show the rotation of the Earth from the point of view of the stars.

Here is a minor edit to the excellent video by Stephane Guisard and Jose Francisco Salgado, posted at the Nicolas Bustos channel. Visit http://eso.org for more about the European Southern Observatory.

Credits for original video:
ESO/José Francisco Salgado (http://www.josefrancisco.org)
ESO/S. Guisard (http://www.astrosurf.com/sguisard/)

At http://youtu.be/wFpeM3fxJoQ is the original video by Guisard and Salgado.

At http://astrosurf.com/sguisard/ and http://josefrancisco.org/ are more pictures from the European Southern Observatory’s Very Large Telescope (VLT) located in the Atacama Desert, Chile.

Music: “Arcadia” available at http://incompetech.com, copyright by Kevin Macleod.

If you like this, you may like the intriguing and well-executed art at http://www.melissapaintsthings.com/.

 

Sarychev Volcano (Kuril Islands, northeast of Japan) in an early stage of eruption

ISS020-E-09048

A fortuitous orbit of the International Space Station allowed the astronauts this striking view of Sarychev Volcano (Kuril Islands, northeast of Japan) in an early stage of eruption on June 12, 2009. Sarychev Peak is one of the most active volcanoes in the Kuril Island chain, and it is located on the northwestern end of Matua Island. Prior to June 12, the last explosive eruption occurred in 1989, with eruptions in 1986, 1976, 1954, and 1946 also producing lava flows. Ash from the multi-day eruption has been detected 2,407 kilometers east-southeast and 926 kilometers west-northwest of the volcano, and commercial airline flights are being diverted away from the region to minimize the danger of engine failures from ash intake.
This detailed astronaut photograph is exciting to volcanologists because it captures several phenomena that occur during the earliest stages of an explosive volcanic eruption. The main column is one of a series of plumes that rose above Matua Island on June 12. The plume appears to be a combination of brown ash and white steam. The vigorously rising plume gives the steam a bubble-like appearance.
In contrast, the smooth white cloud on top may be water condensation that resulted from rapid rising and cooling of the air mass above the ash column. This cloud, which meteorologists call a pileus cloud, is probably a transient feature: the eruption plume is starting to punch through. The structure also indicates that little to no shearing wind was present at the time to disrupt the plume. (Satellite images acquired 2-3 days after the start of activity illustrate the effect of shearing winds on the spread of the ash plumes across the Pacific Ocean.)
By contrast, a cloud of denser, gray ash—probably a pyroclastic flow—appears to be hugging the ground, descending from the volcano summit. The rising eruption plume casts a shadow to the northwest of the island (image top). Brown ash at a lower altitude of the atmosphere spreads out above the ground at image lower left. Low-level stratus clouds approach Matua Island from the east, wrapping around the lower slopes of the volcano. Only about 1.5 kilometers of the coastline of Matua Island (image lower center) are visible beneath the clouds and ash.

http://earthobservatory.nasa.gov/IOTD/view.php?id=38985
