Researchers in Japan are showing a way to decode thoughts


 

“The team has created a first-of-its-kind algorithm that can interpret and accurately reproduce images seen or imagined by a person,” wrote Alexandru Micu in ZME Science.

Their paper, “Deep image reconstruction from human brain activity,” is on bioRxiv. The authors are Guohua Shen, Tomoyasu Horikawa, Kei Majima, and Yukiyasu Kamitani.

Vanessa Ramirez, associate editor of Singularity Hub, was one of several writers on tech-watching sites who reported on the study. The writers noted that this marks a departure from earlier research that reconstructed images only from pixels and basic shapes.

“Trying to tame a computer to decode mental images isn’t a new idea,” said Micu. “However, all previous systems have been limited in scope and ability. Some can only handle narrow domains like facial shape, while others can only rebuild images from preprogrammed images or categories.”

What is special here, Micu said, is that “their new algorithm can generate new, recognizable images from scratch.”

The study team has been exploring deep image reconstruction. Micu quoted the senior author of the study. “We believe that a deep neural network is a good proxy for the brain’s hierarchical processing,” said Yukiyasu Kamitani.

Overview of deep image reconstruction is shown. The pixel values of the input image are optimized so that the DNN features of the image are similar to those decoded from fMRI activity. A deep generator network (DGN) is optionally combined with the DNN to produce natural-looking images; in that case, optimization is performed in the input space of the DGN. Credit: bioRxiv (2017). DOI: 10.1101/240317

Abstract

Machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns has enabled the visualization of perceptual content. However, it has been limited to the reconstruction with low-level image bases (Miyawaki et al., 2008; Wen et al., 2016) or to the matching to exemplars (Naselaris et al., 2009; Nishimoto et al., 2011). Recent work showed that visual cortical activity can be decoded (translated) into hierarchical features of a deep neural network (DNN) for the same input image, providing a way to make use of the information from hierarchical visual features (Horikawa & Kamitani, 2017). Here, we present a novel image reconstruction method, in which the pixel values of an image are optimized to make its DNN features similar to those decoded from human brain activity at multiple layers. We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery. While our model was solely trained with natural images, our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed reconstructs or generates images from brain activity, not simply matches to exemplars. A natural image prior introduced by another deep neural network effectively rendered semantically meaningful details to reconstructions by constraining reconstructed images to be similar to natural images. Furthermore, human judgment of reconstructions suggests the effectiveness of combining multiple DNN layers to enhance visual quality of generated images. The results suggest that hierarchical visual information in the brain can be effectively combined to reconstruct perceptual and subjective images.
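To make the abstract’s core idea concrete, here is a minimal sketch of that kind of feature-matching optimization: pixels are adjusted until the image’s DNN activations approach features decoded from fMRI. It assumes a hypothetical `feature_extractor` that returns a dict of layer activations and pre-decoded `decoded_features`; it is illustrative only, not the authors’ code, and omits the deep generator network (DGN) prior described in the figure caption.

```python
# Minimal sketch of DNN-feature-matching reconstruction (illustrative, not the authors' code).
# Assumes `feature_extractor(image)` returns {layer_name: activation tensor} for a pretrained DNN,
# and `decoded_features` holds the corresponding features decoded from fMRI activity.
import torch

def reconstruct_image(feature_extractor, decoded_features, layers, steps=500, lr=0.05):
    image = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        activations = feature_extractor(image)
        # Match the image's DNN features to the brain-decoded features across multiple layers.
        loss = sum(((activations[l] - decoded_features[l]) ** 2).mean() for l in layers)
        loss.backward()
        optimizer.step()
    return image.detach()
```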

The collision model – Young proto-Earth, Theia & the Moon


 

Editor’s summary

The Moon is thought to have formed mainly from material within a giant impactor that struck the proto-Earth, so it seems odd that the compositions of the Moon and Earth are so similar, given the differing composition of other Solar System bodies. Alessandra Mastrobuono-Battisti et al. track the feeding zones of growing planets in a suite of computational simulations of planetary accretion and find that different planets formed in the same simulation have distinct compositions, but the compositions of giant impactors are more similar to the planets they impact. A significant fraction of planet–impactor pairs have virtually identical compositions. The authors conclude that the similarity in composition between the Earth and Moon could be a natural consequence of a late giant impact.

Paper:

A primordial origin for the compositional similarity between the Earth and the Moon

Alessandra Mastrobuono-Battisti, Hagai B. Perets & Sean N. Raymond
Nature 520, 212–215 (09 April 2015) doi:10.1038/nature14333
Received 10 November 2014 Accepted 10 February 2015

http://www.nature.com/nature/journal/v520/n7546/abs/nature14333.html

Most of the properties of the Earth–Moon system can be explained by a collision between a planetary embryo (giant impactor) and the growing Earth late in the accretion process [1–3]. Simulations show that most of the material that eventually aggregates to form the Moon originates from the impactor [1, 4, 5]. However, analyses of the terrestrial and lunar isotopic compositions show them to be highly similar [6–11]. In contrast, the compositions of other Solar System bodies are significantly different from those of the Earth and Moon [12–14], suggesting that different Solar System bodies have distinct compositions. This challenges the giant impact scenario, because the Moon-forming impactor must then also be thought to have a composition different from that of the proto-Earth. Here we track the feeding zones of growing planets in a suite of simulations of planetary accretion [15], to measure the composition of Moon-forming impactors. We find that different planets formed in the same simulation have distinct compositions, but the compositions of giant impactors are statistically more similar to the planets they impact. A large fraction of planet–impactor pairs have almost identical compositions. Thus, the similarity in composition between the Earth and Moon could be a natural consequence of a late giant impact.
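The “feeding zone” comparison in the abstract can be illustrated with a toy calculation: treat the initial semi-major axes of the material each body accretes as a compositional proxy and compare planet against impactor. The numbers and the `feeding_zone_offset` helper below are hypothetical, meant only to show the kind of bookkeeping involved, not the authors’ analysis.

```python
# Illustrative sketch: compare the feeding zones of a planet and its last giant impactor,
# using the initial semi-major axes (AU) of accreted planetesimals as a crude compositional proxy.
import numpy as np

def feeding_zone_offset(planet_origins, impactor_origins):
    """Absolute difference between mean source distances of accreted material (AU)."""
    return abs(np.mean(planet_origins) - np.mean(impactor_origins))

# Hypothetical accretion histories drawn for illustration:
rng = np.random.default_rng(0)
planet = rng.normal(1.0, 0.3, size=200)    # source distances of material in the planet
impactor = rng.normal(1.1, 0.3, size=50)   # source distances of material in the impactor
print(f"feeding-zone offset: {feeding_zone_offset(planet, impactor):.2f} AU")
```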

News:
Hit Me With Your Best Shot

Gravitational Lensing (Cosmic magnifying glass)


News:

http://www.iflscience.com/space/gravitational-lens-allows-us-witness-supernova-repeat

Four versions of the same supernova explosion have been captured because a large galaxy between us and the event is distorting the path the light travels to reach us. The event not only makes visible a supernova more distant than we normally see but also provides the opportunity astronomers have been dreaming of to test three of the biggest questions in cosmology. Even more opportunities should arise in the future.

One of the key predictions of General Relativity is that mass bends spacetime, and therefore light. Einstein predicted that very massive objects could focus light in a manner analogous to glass lenses, an effect finally observed in 1979.

Depending on the locations of the relevant objects, we often see multiple images of the same distant quasar or galaxy. Since this light follows different paths to reach us, the distance traveled along each path is not identical, so we see some images slightly delayed relative to the others. This makes little difference for an object whose brightness barely varies.

However, in 1964 Sjur Refsdal pointed out that different images of the same supernova would capture different moments in the explosion’s evolution, and might be used to test the rate at which the universe is expanding. Great efforts have been made to find such a valuable case. Dr Patrick Kelly of the University of California, Berkeley was looking for distant galaxies when he came across four images of a nine-billion-year-old supernova around a galaxy in the MACS J1149.6+2223 cluster.
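Refsdal’s idea rests on the standard strong-lensing time-delay relation (a textbook result, not taken from this article): light arriving from image positions θ_i and θ_j of a source at β is delayed by

```latex
% Time delay between two lensed images; D_l, D_s, D_ls are angular diameter distances
% to the lens, to the source, and between them, z_l is the lens redshift, and \psi is
% the lens potential. Because the distances scale as c/H_0, \Delta t_{ij} \propto 1/H_0,
% which is why measuring the delays constrains the cosmic expansion rate.
\Delta t_{ij} = \frac{1+z_l}{c}\,\frac{D_l D_s}{D_{ls}}
\left[\frac{(\boldsymbol{\theta}_i-\boldsymbol{\beta})^2-(\boldsymbol{\theta}_j-\boldsymbol{\beta})^2}{2}
 - \psi(\boldsymbol{\theta}_i) + \psi(\boldsymbol{\theta}_j)\right]
```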

Astronomers have glimpsed a far off and ancient star exploding, not once, but four times.

The exploding star, or supernova, was directly behind a cluster of huge galaxies, whose mass is so great that they warp space-time. This forms a cosmic magnifying glass that creates multiple images of the supernova, an effect first predicted by Albert Einstein’s General Theory of Relativity 100 years ago.

Dr Brad Tucker from The Australian National University (ANU) says it’s a dream discovery for the team.

“It’s perfectly set up, you couldn’t have designed a better experiment,” said Dr Tucker, from ANU Research School of Astronomy and Astrophysics.

“You can test some of the biggest questions about Einstein’s theory of relativity all at once – it kills three birds with one stone.”

Astronomers have mounted searches for such a cosmic arrangement over the past 20 years. However, this discovery was made during a separate search for distant galaxies by Dr Patrick Kelly from the University of California, Berkeley.

“It really threw me for a loop when I spotted the four images surrounding the galaxy – it was a complete surprise,” he said.

The lucky discovery allows not only testing of the Theory of Relativity, but gives information about the strength of gravity, and the amount of dark matter and dark energy in the universe.

Because the gravitational effect of the intervening galaxy cluster magnifies the supernova that would normally be too distant to see, it provides a window into the deep past, Dr Tucker said.

“It’s a relic of a simpler time, when the universe was still slowing down and dark energy was not doing crazy stuff,” he said.

“We can use that to work out how dark matter and dark energy have messed up the universe.”

Paper:

http://science.sciencemag.org/content/347/6226/1123

Multiple images of a highly magnified supernova formed by an early-type cluster galaxy lens
By Patrick L. Kelly, Steven A. Rodney, Tommaso Treu, Ryan J. Foley, Gabriel Brammer, Kasper B. Schmidt, Adi Zitrin, Alessandro Sonnenfeld, Louis-Gregory Strolger, Or Graur, Alexei V. Filippenko, Saurabh W. Jha, Adam G. Riess, Marusa Bradac, Benjamin J. Weiner, Daniel Scolnic, Matthew A. Malkan, Anja von der Linden, Michele Trenti, Jens Hjorth, Raphael Gavazzi, Adriano Fontana, Julian C. Merten, Curtis McCully, Tucker Jones, Marc Postman, Alan Dressler, Brandon Patel, S. Bradley Cenko, Melissa L. Graham, Bradley E. Tucker
Science, 06 Mar 2015: 1123–1126
Light from a distant supernova at z = 1.491 is detected in four images after being deflected en route by gravitational forces.

Abstract

In 1964, Refsdal hypothesized that a supernova whose light traversed multiple paths around a strong gravitational lens could be used to measure the rate of cosmic expansion. We report the discovery of such a system. In Hubble Space Telescope imaging, we have found four images of a single supernova forming an Einstein cross configuration around a redshift z = 0.54 elliptical galaxy in the MACS J1149.6+2223 cluster. The cluster’s gravitational potential also creates multiple images of the z = 1.49 spiral supernova host galaxy, and a future appearance of the supernova elsewhere in the cluster field is expected. The magnifications and staggered arrivals of the supernova images probe the cosmic expansion rate, as well as the distribution of matter in the galaxy and cluster lenses.

Cryogenic clocks will stay accurate for 16 BILLION years


Paper:

http://www.nature.com/nphoton/journal/v9/n3/full/nphoton.2015.5.html

Cryogenic optical lattice clocks

Ichiro Ushijima, Masao Takamoto, Manoj Das, Takuya Ohkubo & Hidetoshi Katori

Nature Photonics 9, 185–189 (2015) doi:10.1038/nphoton.2015.5
Received 13 May 2014 Accepted 06 January 2015 Published online 09 February 2015

The accuracy of atomic clocks relies on the superb reproducibility of atomic spectroscopy, which is accomplished by careful control and the elimination of environmental perturbations on atoms. To date, individual atomic clocks have achieved a 10^−18 level of total uncertainties [1, 2], but a two-clock comparison at the 10^−18 level has yet to be demonstrated. Here, we demonstrate optical lattice clocks with 87Sr atoms interrogated in a cryogenic environment to address the blackbody radiation-induced frequency shift [3], which remains the primary source of systematic uncertainty [2, 4–6] and has initiated vigorous theoretical [7, 8] and experimental [9, 10] investigations. The systematic uncertainty for the cryogenic clock is evaluated to be 7.2 × 10^−18, which is expedited by operating two such cryo-clocks synchronously [11, 12]. After 11 measurements performed over a month, statistical agreement between the two cryo-clocks reached 2.0 × 10^−18. Such clocks’ reproducibility is a major step towards developing accurate clocks at the low 10^−18 level, and is directly applicable as a means for relativistic geodesy [13].
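The “16 billion years” headline follows from the reported clock agreement by a back-of-envelope conversion (mine, not stated this way in the abstract): at a fractional frequency uncertainty of 2.0 × 10^−18, the time needed to accumulate one second of error is

```latex
% Back-of-envelope conversion from fractional uncertainty to accumulated error
T \approx \frac{1\ \mathrm{s}}{2\times 10^{-18}}
  = 5\times 10^{17}\ \mathrm{s}
  \approx 1.6\times 10^{10}\ \mathrm{yr}
  \approx 16\ \text{billion years}.
```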

News:

New ‘Cryogenic’ Clock Developed In Japan Accurate For 16 Billion Years

These new cryogenic clocks will stay accurate for 16 BILLION years

70,000 years ago, a star passed through our Solar System


News:

An alien star passed through our Solar System just 70,000 years ago, astronomers have discovered.

http://www.bbc.com/news/science-environment-31519875

Paper:

https://iopscience.iop.org/article/10.1088/2041-8205/800/1/L17;jsessionid=71643332CDF2C8FACA84F1FA463B8C18.c4.iopscience.cld.iop.org#

The Closest Known Flyby of a Star to the Solar System

Eric E. Mamajek, Scott A. Barenfeld, Valentin D. Ivanov, Alexei Y. Kniazev, Petri Väisänen, Yuri Beletsky, and Henri M. J. Boffin

Published 2015 February 12. © 2015 The American Astronomical Society. All rights reserved.
The Astrophysical Journal Letters, Volume 800, Number 1

Abstract

Passing stars can perturb the Oort Cloud, triggering comet showers and potentially extinction events on Earth. We combine velocity measurements for the recently discovered, nearby, low-mass binary system WISE J072003.20-084651.2 (“Scholz’s star”) to calculate its past trajectory. Integrating the Galactic orbits of this ~0.15 M☉ binary system and the Sun, we find that the binary passed within only 52 (+23/−14) kAU (0.25 (+0.11/−0.07) pc) of the Sun 70 (+15/−10) kya (1σ uncertainties), i.e., within the outer Oort Cloud. This is the closest known encounter of a star to our solar system with a well-constrained distance and velocity. Previous work suggests that flybys within 0.25 pc occur infrequently (~0.1 Myr^−1). We show that given the low mass and high velocity of the binary system, the encounter was dynamically weak. Using the best available astrometry, our simulations suggest that the probability that the star penetrated the outer Oort Cloud is ~98%, but the probability of penetrating the dynamically active inner Oort Cloud (<20 kAU) is ~10^−4. While the flyby of this system likely caused negligible impact on the flux of long-period comets, the recent discovery of this binary highlights that dynamically important Oort Cloud perturbers may be lurking among nearby stars.

1. INTRODUCTION

Perturbations by passing stars on Oort cloud comets have previously been proposed as the source of long-period comets visiting the planetary region of the solar system (Oort 1950; Biermann et al. 1983; Weissman 1996; Rickman 2014), and possibly for generating Earth-crossing comets that produce biological extinction events (Davis et al. 1984). Approximately 30% of craters with diameters <10 km on the Earth and Moon are likely due to long-period comets from the Oort Cloud (Weissman 1996). Periodic increases in the flux of Oort cloud comets due to a hypothetical substellar companion have been proposed (Whitmire & Jackson 1984); however, recent time series analyses of terrestrial impact craters are inconsistent with periodic variations (Bailer-Jones 2011), and sensitive infrared sky surveys have yielded no evidence for any wide-separation substellar companion (Luhman 2014). A survey of nearby field stars with Hipparcos astrometric data (Perryman et al. 1997) by García-Sánchez et al. (1999) identified only a single candidate with a pass within 0.9 pc of the Sun (Gl 710; 1.4 Myr in the future at ~0.34 pc); however, it is predicted that ~12 stars pass within 1 pc of the Sun every Myr (García-Sánchez et al. 2001). A recent analysis by Bailer-Jones (2014) of the orbits of ~50,000 stars, using the revised Hipparcos astrometry from van Leeuwen (2007), identified four Hipparcos stars whose future flybys may bring them within 0.5 pc of the Sun (however, the closest candidate HIP 85605 has large astrometric uncertainties; see discussion in Section 3).

A low-mass star in the solar vicinity in Monoceros, WISE J072003.20-084651.2 (hereafter W0720 or “Scholz’s star”), was recently discovered with a photometric distance of ~7 pc and an initial spectral classification of M9 ± 1 (Scholz 2014). This nearby star likely remained undiscovered for so long due to its combination of proximity to the Galactic plane (b = +2.3°), optical dimness (V = 18.3 mag), and low proper motion (~0.1″ yr^−1). The combination of proximity and low tangential velocity for W0720 (V_tan ≈ 3 km s^−1) initially drew our attention to this system. If most of the star’s motion was radial, it was possible that the star may have had a past or future close pass to the Sun. Indeed, Burgasser et al. (2014) and Ivanov et al. (2014) have recently reported a high positive radial velocity. Burgasser et al. (2014) resolved W0720 as an M9.5+T5 binary and provided a trigonometric parallax distance of 6.0 (+1.2/−0.9) pc. Here we investigate the trajectory of the W0720 system with respect to the solar system, and demonstrate that the star recently (~70,000 years ago) passed through the Oort Cloud.
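A quick consistency check, using only round numbers quoted above (a present distance of about 6 pc and a flyby roughly 70,000 years ago), shows why the “high positive radial velocity” matters: if most of the motion is radial, the implied speed is of order 80 km/s. The sketch below is illustrative arithmetic, not the authors’ orbit integration.

```python
# Back-of-envelope check: distance / time since closest approach ~ radial velocity,
# assuming the star's motion is almost entirely radial (illustrative only).
PC_IN_KM = 3.086e13   # kilometers per parsec
YR_IN_S = 3.156e7     # seconds per year

d_km = 6.0 * PC_IN_KM        # present distance of Scholz's star
t_s = 70_000 * YR_IN_S       # time since the flyby
print(f"implied radial velocity ~ {d_km / t_s:.0f} km/s")   # roughly 84 km/s
```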

A Medieval Remedy that Kills MRSA (Methicillin-Resistant Staphylococcus aureus)


News:

http://www.bbc.com/news/uk-england-nottinghamshire-32117815

https://www.nottingham.ac.uk/news/pressreleases/2015/march/ancientbiotics—a-medieval-remedy-for-modern-day-superbugs.aspx

Paper:

http://mbio.asm.org/content/6/4/e01129-15.full

A 1,000-Year-Old Antimicrobial Remedy with Antistaphylococcal Activity

Freya Harrison, Aled E. L. Roberts, Rebecca Gabrilska, Kendra P. Rumbaugh, Christina Lee, Stephen P. Diggle

Centre for Biomolecular Sciences, School of Life Sciences, University of Nottingham, University Park, Nottingham, United Kingdom; Department of Surgery, Texas Tech University Health Sciences Center, School of Medicine, Lubbock, Texas, USA; School of English and Centre for the Study of the Viking Age, University of Nottingham, University Park, Nottingham, United Kingdom

ABSTRACT

Plant-derived compounds and other natural substances are a rich potential source of compounds that kill or attenuate pathogens that are resistant to current antibiotics. Medieval societies used a range of these natural substances to treat conditions clearly recognizable to the modern eye as microbial infections, and there has been much debate over the likely efficacy of these treatments. Our interdisciplinary team, comprising researchers from both sciences and humanities, identified and reconstructed a potential remedy for Staphylococcus aureus infection from a 10th century Anglo-Saxon leechbook. The remedy repeatedly killed established S. aureus biofilms in an in vitro model of soft tissue infection and killed methicillin-resistant S. aureus (MRSA) in a mouse chronic wound model. While the remedy contained several ingredients that are individually known to have some antibacterial activity, full efficacy required the combined action of several ingredients, highlighting the scholarship of premodern doctors and the potential of ancient texts as a source of new antimicrobial agents.

IMPORTANCE

While the antibiotic potential of some materials used in historical medicine has been demonstrated, empirical tests of entire remedies are scarce. This is an important omission, because the efficacy of “ancientbiotics” could rely on the combined activity of their various ingredients. This would lead us to underestimate their efficacy and, by extension, the scholarship of premodern doctors. It could also help us to understand why some natural compounds that show antibacterial promise in the laboratory fail to yield positive results in clinical trials. We have reconstructed a 1,000-year-old remedy which kills the bacteria it was designed to treat and have shown that this activity relies on the combined activity of several antimicrobial ingredients. Our results highlight (i) the scholarship and rational methodology of premodern medical professionals and (ii) the untapped potential of premodern remedies for yielding novel therapeutics at a time when new antibiotics are desperately needed.

Half of all DNA present on the NYC subway’s surfaces matches no known organism


 

The Paper:

Geospatial Resolution of Human and Bacterial Diversity with City-Scale Metagenomics

http://www.cell.com/pb/assets/raw/journals/research/cell-systems/do-not-delete/CELS1_FINAL.pdf

In Brief
Afshinnekoo et al. describe a city-scale molecular profile of DNA collected from a city’s subway system, public surfaces, and one waterway. These data enable a baseline analysis of bacterial, eukaryotic, and archaeal organisms in the built environment of mass transit and urban life.

Highlights

  • Almost half of all DNA present on the subway’s surfaces matches no known organism.
  • Hundreds of species of bacteria are in the subway, mostly harmless. More riders bring more diversity.
  • One station flooded during Hurricane Sandy still resembles a marine environment.
  • Human allele frequencies in DNA on surfaces can mirror US Census data.

SUMMARY

The panoply of microorganisms and other species present in our environment influence human health and disease, especially in cities, but have not been profiled with metagenomics at a city-wide scale. We sequenced DNA from surfaces across the entire New York City (NYC) subway system, the Gowanus Canal, and public parks. Nearly half of the DNA (48%) does not match any known organism; identified organisms spanned 1,688 bacterial, viral, archaeal, and eukaryotic taxa, which were enriched for harmless genera associated with skin (e.g., Acinetobacter). Predicted ancestry of human DNA left on subway surfaces can recapitulate U.S. Census demographic data, and bacterial signatures can reveal a station’s history, such as marine-associated bacteria in a hurricane-flooded station. Some evidence of pathogens was found (Bacillus anthracis), but a lack of reported cases in NYC suggests that the pathogens represent a normal, urban microbiome. This baseline metagenomic map of NYC could help long-term disease surveillance, bioterrorism threat mitigation, and health management in the built environment of cities.
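The headline figure (48% of DNA matching no known organism) is, at bottom, a tally over per-read classifications. The sketch below shows that bookkeeping with a toy input; it is illustrative only and is not the authors’ metagenomic pipeline, whose classifiers and reference databases are described in the paper.

```python
# Illustrative tally over per-read taxonomic assignments (not the authors' pipeline).
from collections import Counter

def summarize_classifications(assignments):
    """`assignments` maps read IDs to a taxon name, or None if the read is unclassified."""
    taxa = Counter(t for t in assignments.values() if t is not None)
    unclassified = sum(1 for t in assignments.values() if t is None)
    total = len(assignments)
    return {
        "fraction_unclassified": unclassified / total,
        "distinct_taxa": len(taxa),
        "most_common": taxa.most_common(3),
    }

# Hypothetical toy input:
reads = {"r1": "Acinetobacter", "r2": None, "r3": "Pseudomonas", "r4": None}
print(summarize_classifications(reads))  # fraction_unclassified = 0.5 here
```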

The News:

Researchers at Weill Cornell Medical College released a study on Thursday that mapped DNA found in New York’s subway system — a crowded, largely subterranean behemoth that carries 5.5 million riders on an average weekday, and is filled with hundreds of species of bacteria (mostly harmless), the occasional spot of bubonic plague, and a universe of enigmas. Almost half of the DNA found on the system’s surfaces did not match any known organism and just 0.2 percent matched the human genome.

“People don’t look at a subway pole and think, ‘It’s teeming with life,’ ” said Dr. Christopher E. Mason, a geneticist at Weill Cornell Medical College and the lead author of the study. “After this study, they may. But I want them to think of it the same way you’d look at a rain forest, and be almost in awe and wonder, effectively, that there are all these species present — and that you’ve been healthy all along.”

Robots learn to use kitchen tools by watching YouTube videos


Researchers at the University of Maryland Institute for Advanced Computer Studies (UMIACS) partnered with a scientist at the National Information Communications Technology Research Centre of Excellence in Australia (NICTA) to develop robotic systems that are able to teach themselves. Specifically, these robots are able to learn the intricate grasping and manipulation movements required for cooking by watching online cooking videos. The key breakthrough is that the robots can “think” for themselves, determining the best combination of observed motions that will allow them to efficiently accomplish a given task.

https://www.sciencedaily.com/releases/2015/01/150113090600.htm

Paper: http://www.umiacs.umd.edu/~yzyang/paper/YouCookMani_CameraReady.pdf

Robot Learning Manipulation Action Plans by “Watching” Unconstrained Videos from the World Wide Web

Yezhou Yang, University of Maryland, yzyang@cs.umd.edu

Yi Li, NICTA, Australia, yi.li@nicta.com.au

Cornelia Fermüller, University of Maryland, fer@umiacs.umd.edu

Yiannis Aloimonos, University of Maryland, yiannis@cs.umd.edu

Abstract

In order to advance action generation and creation in robots beyond simple learned schemas we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions of seen longer actions in video in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation. Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by “watching” unconstrained videos with high accuracy.

Introduction

The ability to learn actions from human demonstrations is one of the major challenges for the development of intelligent systems. Particularly, manipulation actions are very challenging, as there is large variation in the way they can be performed and there are many occlusions. Our ultimate goal is to build a self-learning robot that is able to enrich its knowledge about fine grained manipulation actions by “watching” demo videos. In this work we explicitly model actions that involve different kinds of grasping, and aim at generating a sequence of atomic commands by processing unconstrained videos from the World Wide Web (WWW).

The robotics community has been studying perception and control problems of grasping for decades (Shimoga 1996). Recently, several learning based systems were reported that infer contact points or how to grasp an object from its appearance (Saxena, Driemeyer, and Ng 2008; Lenz, Lee, and Saxena 2014). However, the desired grasping type could be different for the same target object, when used for different action goals. Traditionally, data about the grasp has been acquired using motion capture gloves or hand trackers, such as the model-based tracker of (Oikonomidis, Kyriazis, and Argyros 2011). The acquisition of grasp information from video (without 3D information) is still considered very difficult because of the large variation in appearance and the occlusions of the hand from objects during manipulation.

Our premise is that actions of manipulation are represented at multiple levels of abstraction. At lower levels the symbolic quantities are grounded in perception, and at the high level a grammatical structure represents symbolic information (objects, grasping types, actions). With the recent development of deep neural network approaches, our system integrates a CNN based object recognition and a CNN based grasping type recognition module. The latter recognizes the subject’s grasping type directly from image patches.

The grasp type is an essential component in the characterization of manipulation actions. Just from the viewpoint of processing videos, the grasp contains information about the action itself, and it can be used for prediction or as a feature for recognition. It also contains information about the beginning and end of action segments, thus it can be used to segment videos in time. If we are to perform the action with a robot, knowledge about how to grasp the object is necessary so the robot can arrange its effectors. For example, consider a humanoid with one parallel gripper and one vacuum gripper. When a power grasp is desired, the robot should select the vacuum gripper for a stable grasp, but when a precision grasp is desired, the parallel gripper is a better choice. Thus, knowing the grasping type provides information for the robot to plan the configuration of its effectors, or even the type of effector to use.

In order to perform a manipulation action, the robot also needs to learn what tool to grasp and on what object to perform the action. Our system applies CNN based recognition modules to recognize the objects and tools in the video. Then, given the beliefs of the tool and object (from the output of the recognition), our system predicts the most likely action using language, by mining a large corpus using a technique similar to (Yang et al. 2011).
Putting everything together, the output from the lower level visual perception system is in the form of (LeftHand GraspType1 Object1 Action RightHand GraspType2 Object2). We will refer to this septet of quantities as a visual sentence. At the higher level of representation, we generate a symbolic command sequence. (Yang et al. 2014) proposed a context-free grammar and related operations to parse manipulation actions. However, their system only processed RGBD data from a controlled lab environment. Furthermore, they did not consider the grasping type in the grammar. This work extends (Yang et al. 2014) by modeling manipulation actions using a probabilistic variant of the context-free grammar, and explicitly modeling the grasping type. Using as input the belief distributions from the CNN based visual perception system, a Viterbi probabilistic parser is used to represent actions in the form of a hierarchical and recursive tree structure. This structure innately encodes the order of atomic actions in a sequence, and forms the basic unit of our knowledge representation. By reverse parsing it, our system is able to generate a sequence of atomic commands in predicate form, i.e. as Action(Subject, Patient), plus the temporal information necessary to guide the robot. This information can then be used to control the robot effectors (Argall et al. 2009). Our contributions are twofold: (1) a convolutional neural network (CNN) based method has been adopted to achieve state-of-the-art performance in grasping type classification and object recognition on unconstrained video data; (2) a system for learning information about human manipulation action has been developed that links lower level visual perception and higher level semantic structures through a probabilistic manipulation action grammar.
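To make the “visual sentence” idea concrete, here is a minimal sketch that turns belief distributions from the two recognition modules into one sentence and a predicate-form command. It simply takes the most likely label from each distribution; the actual system instead parses the beliefs with a probabilistic manipulation-action grammar and a Viterbi parser, and all labels and probabilities below are hypothetical.

```python
# Illustrative sketch: from CNN belief distributions to a visual sentence and an atomic command.
def most_likely(beliefs):
    """Return the label with the highest probability from a {label: probability} dict."""
    return max(beliefs, key=beliefs.get)

def visual_sentence(left_grasp, left_obj, action, right_grasp, right_obj):
    l_grasp, l_obj = most_likely(left_grasp), most_likely(left_obj)
    r_grasp, r_obj = most_likely(right_grasp), most_likely(right_obj)
    act = most_likely(action)
    sentence = ("LeftHand", l_grasp, l_obj, act, "RightHand", r_grasp, r_obj)
    command = f"{act}({l_obj}, {r_obj})"   # predicate form, e.g. Cut(knife, cucumber)
    return sentence, command

# Hypothetical beliefs from the grasp-type and object recognition modules:
sentence, command = visual_sentence(
    left_grasp={"power": 0.8, "precision": 0.2},
    left_obj={"knife": 0.9, "spoon": 0.1},
    action={"Cut": 0.7, "Stir": 0.3},
    right_grasp={"precision": 0.6, "power": 0.4},
    right_obj={"cucumber": 0.85, "bowl": 0.15},
)
print(sentence)
print(command)
```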

 

DNA Survives Re-Entry Into Earth’s Atmosphere

Functional Activity of Plasmid DNA after Entry into the Atmosphere of Earth Investigated by a New Biomarker Stability Assay for Ballistic Spaceflight Experiments

Paper: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0112979

Abstract

Sounding rockets represent an excellent platform for testing the influence of space conditions during the passage of Earth’s atmosphere and re-entry on biological, physical and chemical experiments for astrobiological purposes. We designed a robust functionality biomarker assay to analyze the biological effects of the suborbital spaceflight conditions prevailing during ballistic rocket flights. During the TEXUS-49 rocket mission in March 2011, artificial plasmid DNA carrying a fluorescent marker (enhanced green fluorescent protein: EGFP) and an antibiotic resistance cassette (kanamycin/neomycin) was attached at different positions on the rocket exterior: (i) at 90-degree intervals around the outer surface of the payload, (ii) in the grooves of screw heads located between the surface application sites, and (iii) on the surface of the bottom side of the payload. Temperature measurements on the inside of the recovery module showed two major peaks at 118 and 130°C during the 780-second flight, while outer gas temperatures of more than 1000°C were estimated at the sample application locations. Directly after retrieval and return transport of the payload, the plasmid DNA samples were recovered. Subsequent analyses showed that DNA could be recovered from all application sites, with a maximum of 53% in the grooves of the screw heads. We could further show that up to 35% of the DNA retained its full biological function, i.e., mediating antibiotic resistance in bacteria and fluorescent marker expression in eukaryotic cells. These experiments show that our plasmid DNA biomarker assay is suitable to characterize the environmental conditions affecting DNA during atmospheric transit and re-entry, and they constitute the first report of the stability of DNA during hypervelocity atmospheric transit, indicating that sounding rocket flights can be used to model the high-speed atmospheric entry of organics-laden artificial meteorites.

Experimental platforms for microbial survival under space conditions

Various experimental platforms have been used so far to investigate this question (reviewed in [11]). These platforms were often located in low Earth orbit (LEO); e.g., in the early 1990s the European Space Agency (ESA) initiated the development of a facility designed for short-term (up to two weeks) space exposure experiments on biological and material samples (BIOPAN) [12]. This facility, conceived for multi-user purposes, was mounted on the external surface of the descent module of the Russian Foton satellite. Astrobiology, radiobiology, and material science experiments have been performed on BIOPAN, analyzing the effects of solar UV radiation, space vacuum, extreme temperatures, and microgravity on biological samples [12]–[17]. Nowadays platforms with even longer exposure times exist, like the multi-user facility EXPOSE, which is located on the outer platforms of the International Space Station (ISS). Different test samples of microbial and eukaryotic origin were subjected to space vacuum for 1.5 years, together with solar electromagnetic radiation, cosmic radiation and simulated Martian surface conditions, and after sample recovery and analysis the survival of a fraction of the tested organisms could be shown [18]–[24].

A critical question is whether microorganisms not only survive residence in space but would also be able to withstand the hostile conditions of entering a planet’s atmosphere when situated, for example, on a meteorite. Due to velocities of 10–20 km/s, the material is heated by friction and the surface melts. However, the atmospheric transit time is only in the range of a few seconds, preventing the heat from penetrating the rock more than a few centimeters. In fact, temperatures of the interior material of some Martian meteorites stayed far below 100°C [25]. Many microorganisms can survive and still metabolize under these conditions; e.g., Bacillus subtilis spores can survive moist heat at temperatures of up to 100°C for 20 to 30 min, whereas in dry heat spore survival increases by a factor of 1000 [26]. Also, spores of B. subtilis strain WN511, embedded in rock samples, temporarily survived 145°C during the re-entry phase of a ballistic flight at 1.3 km/s on a sounding rocket when not directly exposed to the harsh re-entry conditions at the leading edge [27]. However, in the two atmospheric entry simulation experiments STONE-5 and STONE-6, in which spores of B. subtilis and Ulocladium atrum and the cyanobacterium Chroococcidiopsis sp. were embedded in different rock samples mounted on the heat shield of the Foton-M2 capsule, no viable microorganisms were recovered after the flight [28]–[30].

The observed microorganism die-off is due, among other factors, to damage at the DNA level. The high radiation in space leads to the formation of thymine dimers as well as single- and double-strand breaks and activates the UV repair mechanisms inside the cell, which often cannot cope with the extent of DNA damage (reviewed in [4], [24], [31]–[33]). Additionally, extreme temperatures, especially the high temperatures during entry into a planet’s atmosphere, are detrimental to the whole organism if it is not well protected. High temperatures also have extreme effects at the DNA level. With increasing temperature, the DNA double strand melts and, depending on the sequence and nucleobase composition, separates into two single strands at a specific melting temperature [34]–[36]. At higher temperatures the DNA molecules start to degrade through heat-induced weakening of the phosphodiester bond and subsequent hydrolysis [37]–[38].

Sounding Rockets as research platforms for astrobiological experiments

Experimental time on the ISS and Foton satellites is difficult to acquire because of rare flight opportunities and a highly selective, time-consuming project selection process that at the same time demands the highest safety standards. A suitable alternative to these long-term missions is the execution of experiments on sounding rockets [39]. Whenever microgravity and space exposure times in the range of minutes are sufficient, these platforms are the means of choice. Sounding rocket programs on different launch vehicles (e.g. Mini-TEXUS, TEXUS and MAXUS) offer microgravity times from 3 to 13 minutes with a guaranteed quality of 10^−4 g, normally reaching up to 10^−6 g. During these missions, altitudes from 150 up to 800 km are reached, which allows samples mounted on the outside of the payload to be exposed not only to microgravity but also to space conditions including vacuum, radiation and extreme temperatures [39]–[41]. This provides an excellent test platform for astrobiology research, especially for the analysis and characterisation of biomarkers. Samples can be easily applied to the outside of the payload, sensors monitoring space conditions can be attached in the close vicinity of the samples, very short late-access and early-retrieval times in the range of a few hours can be achieved, safety requirements are less rigorous compared to manned missions, experiment equipment is less costly, and the payload is reusable. Another advantage is the high launch frequency with which experiments can be performed and validated. Within the scope of the European space programs, 10 to 15 sounding rocket missions are launched every year, representing a potential high-throughput test platform for astrobiology research, especially in the field of biomarker characterization.

 

Direct brain-to-brain connection has been established between humans for the second time


Scientists in the US have successfully linked the brains of six people, allowing one person to control the hands of another using just their thoughts.

https://www.sciencedaily.com/releases/2014/11/141105154507.htm

Paper: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0111332

Abstract

We describe the first direct brain-to-brain interface in humans and present results from experiments involving six different subjects. Our non-invasive interface, demonstrated originally in August 2013, combines electroencephalography (EEG) for recording brain signals with transcranial magnetic stimulation (TMS) for delivering information to the brain. We illustrate our method using a visuomotor task in which two humans must cooperate through direct brain-to-brain communication to achieve a desired goal in a computer game. The brain-to-brain interface detects motor imagery in EEG signals recorded from one subject (the “sender”) and transmits this information over the internet to the motor cortex region of a second subject (the “receiver”). This allows the sender to cause a desired motor response in the receiver (a press on a touchpad) via TMS. We quantify the performance of the brain-to-brain interface in terms of the amount of information transmitted as well as the accuracies attained in (1) decoding the sender’s signals, (2) generating a motor response from the receiver upon stimulation, and (3) achieving the overall goal in the cooperative visuomotor task. Our results provide evidence for a rudimentary form of direct information transmission from one human brain to another using non-invasive means.
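As a rough illustration of the sender side of such a pipeline, the sketch below detects motor imagery from EEG mu-band power and sends a network trigger that the receiver’s computer would translate into a TMS pulse. The threshold, the socket address, and the `read_eeg_window` helper are all hypothetical; this is not the study’s software.

```python
# Illustrative sender-side sketch of an EEG -> network -> TMS pipeline (not the study's code).
import socket
import numpy as np

MU_BAND = (8, 12)     # mu-rhythm band in Hz; its power drops during motor imagery
THRESHOLD = 0.6       # hypothetical desynchronization threshold relative to rest

def band_power(signal, fs, band):
    """Mean power of `signal` within `band` (Hz) from a simple FFT estimate."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def run_sender(read_eeg_window, baseline_power, fs=256,
               receiver=("receiver.example.org", 9000)):
    """Continuously read ~1 s EEG windows and signal the receiver when imagery is detected."""
    with socket.create_connection(receiver) as conn:
        while True:
            window = read_eeg_window()                  # one channel, fs samples
            if band_power(window, fs, MU_BAND) < THRESHOLD * baseline_power:
                conn.sendall(b"FIRE\n")                 # receiver side triggers the TMS pulse
```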