The Episteme Spacecraft Project & Molecular Historiography
E-Book Presentation, 2018 (ISBN: 9780463748275)
In Foucault's "Power/Knowledge" (1980), the "episteme" is the apparatus that makes it possible to distinguish what can and cannot be classified as scientific. In the Episteme Spacecraft Project, the science of biology is both a foundation and a value. The aim is to develop a holistic approach to the three fundamental problems of sustainable life in space (radiation, gravity, and energy) and to make the sustainable life of Earth equally sustainable, at the same quality, in space and on other planets. Through the three experiments proposed in the project, it may be possible to build, from the micro scale to the macro, a simulation whose stakeholder is symbiosis (living together) and whose algorithm is DNA.
Molecular Historiography regards DNA as a fundamental molecular building block that has a past, a present, and a future, that is, a history. DNA directs life, as "future motion", that is, "the living being and its life", with great physical efficiency. Life, in fact, is a potential into which every atom in the universe can transform, be transformed, and in whose existence it can serve. We can view every living being as an event field and speak of each living being's individual reality of time and life. Indeed, taking "aliveness" as the potential of the atom, we can conceive of a universal time of life. We can take our world, where life unfolds so exuberantly, as a prime meridian like Greenwich, and begin setting the clock of the living universe from here. The goal is to carry a universal definition of the living beyond carbon-based life, to detect aliveness in every respect and to understand it from its DNA origin. It is also intended that inanimate matter and machines can be evaluated in terms of their effects on life and its quality. Molecular Historiography is the philosophy of a universal time of life; the Episteme Spacecraft Project is a way of establishing this field of life scientifically, within a universal circle of symbiosis.
“HUBERT AIRY first became aware of his affliction in the fall of 1854, when he noticed a small blind spot interfering with his ability to read. “At first it looked just like the spot which you see after having looked at the sun or some bright object,” he later wrote. But the blind spot was growing, its edges taking on a zigzag shape that reminded Airy of the bastions of a fortified medieval town. Only, they were gorgeously colored. And they were moving.
“All the interior of the fortification, so to speak, was boiling and rolling about in a most wonderful manner as if it was some thick liquid all alive,” Airy wrote. What happened next was less wonderful: a splitting headache, what we now call a migraine.”
What is the Earth Biogenome Project?
Powerful advances in genome sequencing technology, informatics, automation, and artificial intelligence have propelled humankind to the threshold of a new era in understanding, utilizing, and conserving biodiversity. For the first time in history, it is possible to efficiently sequence the genomes of all known species, and to use genomics to help discover the remaining 80 to 90 percent of species that are currently hidden from science.
(Phys.org)—A team of researchers with the University of Côte d’Azur in France has found that drops ejected by an oscillating surface can at times travel faster than the surface that ejected them. In their paper published in the journal Physical Review Letters, the team describes experiments they conducted by flinging water from a superhydrophobic surface and what they found.
We investigate the behavior of droplets and soft elastic objects propelled with a catapult. Experiments show that the ejection velocity depends on both the projectile deformation and the catapult acceleration dynamics. With a subtle matching given by a peculiar value of the projectile/catapult frequency ratio, a 250% kinetic energy gain is obtained as compared to the propulsion of a rigid projectile with the same engine. This superpropulsion has strong potentialities: actuation of droplets, sorting of objects according to their elastic properties, and energy saving for propulsion engines.
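The resonance mechanism behind this result can be illustrated with a toy model (a sketch of my own, not the authors' actual model or parameters): a point mass on a linear spring rides a plate through one smooth upward stroke, and detaches once the spring returns to its rest length while extending. Scanning the projectile/catapult frequency ratio, a very stiff ("rigid") projectile leaves at roughly the plate's peak speed, while a suitably tuned soft projectile leaves noticeably faster, qualitatively reproducing the superpropulsion effect.

```python
import math

def eject_velocity(freq_ratio, omega=1.0, amp=1.0, dt=1e-4):
    """Launch speed of a spring-mass 'projectile' catapulted by a plate.

    Plate acceleration a_p(t) = amp * omega^2 * sin(omega * t) for one
    half-period, after which the plate halts. The spring (deflection
    u = z - z_plate) can only push; the projectile detaches when the
    spring reaches rest length (u >= 0) while the mass is separating.
    """
    w0 = freq_ratio * omega                 # projectile natural frequency
    t_end = math.pi / omega                 # end of the drive stroke
    zp_end = amp * (omega * t_end - math.sin(omega * t_end))
    z, vz, t = 0.0, 0.0, 0.0
    t_max = t_end + 4.0 * math.pi / w0 + 1.0
    while t < t_max:
        if t < t_end:
            zp = amp * (omega * t - math.sin(omega * t))
            vp = amp * omega * (1.0 - math.cos(omega * t))
        else:
            zp, vp = zp_end, 0.0            # plate frozen after the stroke
        u = z - zp
        if u >= 0.0 and vz - vp > 0.0:
            return vz                       # detached: lab-frame launch speed
        # semi-implicit Euler; spring pushes while compressed (u < 0)
        vz += (-w0 * w0 * u) * dt
        z += vz * dt
        t += dt
    return 0.0

v_plate_max = 2.0  # amp * omega * (1 - cos(pi)) for amp = omega = 1

# A very stiff projectile behaves rigidly: it leaves near plate peak speed.
gain_rigid = eject_velocity(20.0) / v_plate_max

# Scan frequency ratios: a tuned soft projectile beats the rigid baseline.
gains = {r / 10: eject_velocity(r / 10) / v_plate_max for r in range(12, 40)}
best_ratio, best_gain = max(gains.items(), key=lambda kv: kv[1])
print(f"rigid gain ~ {gain_rigid:.2f}; best gain {best_gain:.2f} at ratio {best_ratio}")
```

In this toy the velocity gain peaks around a frequency ratio of order unity, echoing the paper's "peculiar value of the projectile/catapult frequency ratio", though the exact numbers depend on the assumed plate motion and detachment rule.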
Transit visibility zones of the Solar system planets
R. Wells, K. Poppenhaeger, C. A. Watson, R. Heller
Monthly Notices of the Royal Astronomical Society, Volume 473, Issue 1, 1 January 2018, Pages 345–354, https://doi.org/10.1093/mnras/stx2077
Published: 14 August 2017
The detection of thousands of extrasolar planets by the transit method naturally raises the question of whether potential extrasolar observers could detect the transits of the Solar system planets. We present a comprehensive analysis of the regions in the sky from where transit events of the Solar system planets can be detected. We specify how many different Solar system planets can be observed from any given point in the sky, and find the maximum number to be three. We report the probabilities of a randomly positioned external observer to be able to observe single and multiple Solar system planet transits; specifically, we find a probability of 2.518 per cent to be able to observe at least one transiting planet, 0.229 per cent for at least two transiting planets, and 0.027 per cent for three transiting planets. We identify 68 known exoplanets that have a favourable geometric perspective to allow transit detections in the Solar system and we show how the ongoing K2 mission will extend this list. We use occurrence rates of exoplanets to estimate that there are 3.2 ± 1.2 and temperate Earth-sized planets orbiting GK and M dwarf stars brighter than V = 13 and 16, respectively, that are located in the Earth's transit zone.
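The geometry behind these percentages can be sketched with a back-of-envelope estimate (my own illustration, not the paper's method, which treats grazing transits and overlapping transit zones properly): a randomly placed distant observer lies in a planet's transit zone with probability of roughly R_sun / a, where a is the planet's orbital radius, since the zone is a thin band of half-angle ~R_sun / a around the orbital plane. Summing these bands over the eight planets lands in the same few-per-cent range as the paper's 2.518 per cent.

```python
# Back-of-envelope transit-zone probabilities (illustrative only).
R_SUN_KM = 695_700.0       # nominal solar radius
AU_KM = 149_597_870.7      # astronomical unit

# Semi-major axes in AU (standard values)
orbits_au = {
    "Mercury": 0.387, "Venus": 0.723, "Earth": 1.000, "Mars": 1.524,
    "Jupiter": 5.203, "Saturn": 9.537, "Uranus": 19.19, "Neptune": 30.07,
}

# A band of half-angle ~R_sun/a covers a sky fraction of
# sin(R_sun/a) ~ R_sun/a for such small angles.
probs = {name: R_SUN_KM / (a * AU_KM) for name, a in orbits_au.items()}

for name, p in probs.items():
    print(f"{name:8s} {100 * p:.3f} %")

# Crude upper bound for "at least one planet" (ignores band overlaps,
# which the paper accounts for):
print(f"sum: {100 * sum(probs.values()):.2f} %")
```

Earth's transit zone alone covers about 0.46 per cent of the sky; the naive sum over all eight planets is about 2.8 per cent, slightly above the paper's overlap-corrected 2.518 per cent.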
Lex Fridman of MIT shows how, one day, your (semi-)autonomous vehicle may ask you to take over if you are too distracted, e.g. texting instead of driving.
Introducing our shared autonomy research vehicle with a demo of voice-based transfer of control based on whether the driver is paying attention to the road and on risk factors detected in the external environment. Our work aims to reimagine the self-driving car as a shared autonomy system built around the human being. Paper describing key ideas will be out shortly.
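The transfer-of-control logic described above can be caricatured as a simple arbitration rule (a hypothetical sketch of my own; the names, thresholds, and structure are invented for illustration and are not from Fridman's system): combine a driver-attention estimate with an external-risk estimate, and issue a voice prompt for handover when inattention coincides with elevated risk.

```python
from dataclasses import dataclass

@dataclass
class CabinState:
    driver_attention: float  # 0.0 (fully distracted) .. 1.0 (eyes on road)
    external_risk: float     # 0.0 (empty highway) .. 1.0 (imminent hazard)

def control_decision(state: CabinState,
                     attention_floor: float = 0.4,
                     risk_ceiling: float = 0.6) -> str:
    """Hypothetical shared-autonomy arbitration rule (illustrative only).

    Inattentive driver in a risky scene: the car asks the human, by voice,
    to take the wheel. Inattentive driver in a calm scene: the automation
    raises its own vigilance. Otherwise the current mode continues.
    """
    if state.driver_attention < attention_floor and state.external_risk > risk_ceiling:
        return "voice_prompt: please take the wheel"
    if state.driver_attention < attention_floor:
        return "monitor: increase automation vigilance"
    return "continue: no transfer needed"

print(control_decision(CabinState(driver_attention=0.1, external_risk=0.9)))
print(control_decision(CabinState(driver_attention=0.9, external_risk=0.9)))
```

A real system would estimate both signals continuously from camera and perception pipelines and would hysterese the decision to avoid flip-flopping; this sketch only shows the shape of the arbitration.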
First Man to Walk on the Moon but rejected for a credit card in 1974
Along with the rejection letter, the Diners Club sent back the $15 check that Neil Armstrong had included with his application for the credit card.
The actual letter is to be sold as part of an auction by the Neil Armstrong family. A batch of about 800 items will be sold on Nov. 1 and 2 in Dallas. As a preview, some will be on exhibit Oct. 1–5, 2018, at Heritage Auctions in Manhattan.
Nicole Hone is an industrial designer based in Wellington, New Zealand.
Her Hydrophytes is a series of futuristic aquatic plants created with multi-material 3D printing. The project explores the design and choreography of movement, bringing objects to life through 4D printing. The film is true to life, with no effects added in post-processing.
“The team has created a first-of-its-kind algorithm that can interpret and accurately reproduce images seen or imagined by a person,” wrote Alexandru Micu in ZME Science.
Their paper, “Deep image reconstruction from human brain activity,” is on bioRxiv. The authors are Guohua Shen, Tomoyasu Horikawa, Kei Majima, and Yukiyasu Kamitani.
Vanessa Ramirez, associate editor of Singularity Hub, was one of several writers for tech-watching sites who reported on the study. The writers noted that this would mark a difference from other research involved in deconstructing images based on pixels and basic shapes.
“Trying to tame a computer to decode mental images isn’t a new idea,” said Micu. “However, all previous systems have been limited in scope and ability. Some can only handle narrow domains like facial shape, while others can only rebuild images from preprogrammed images or categories.”
What is special here, Micu said, is that “their new algorithm can generate new, recognizable images from scratch.”
The study team has been exploring deep image reconstruction. Micu quoted the study's senior author, Yukiyasu Kamitani: "We believe that a deep neural network is a good proxy for the brain's hierarchical processing."
Overview of deep image reconstruction. The pixel values of the input image are optimized so that the DNN features of the image are similar to those decoded from fMRI activity. A deep generator network (DGN) can optionally be combined with the DNN to produce natural-looking images, in which case optimization is performed in the input space of the DGN. Credit: bioRxiv (2017). DOI: 10.1101/240317
Machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns has enabled the visualization of perceptual content. However, it has been limited to the reconstruction with low-level image bases (Miyawaki et al., 2008; Wen et al., 2016) or to the matching to exemplars (Naselaris et al., 2009; Nishimoto et al., 2011). Recent work showed that visual cortical activity can be decoded (translated) into hierarchical features of a deep neural network (DNN) for the same input image, providing a way to make use of the information from hierarchical visual features (Horikawa & Kamitani, 2017). Here, we present a novel image reconstruction method, in which the pixel values of an image are optimized to make its DNN features similar to those decoded from human brain activity at multiple layers. We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery. While our model was solely trained with natural images, our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed reconstructs or generates images from brain activity, not simply matches to exemplars. A natural image prior introduced by another deep neural network effectively rendered semantically meaningful details to reconstructions by constraining reconstructed images to be similar to natural images. Furthermore, human judgment of reconstructions suggests the effectiveness of combining multiple DNN layers to enhance visual quality of generated images. The results suggest that hierarchical visual information in the brain can be effectively combined to reconstruct perceptual and subjective images.
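The core optimization the abstract describes, adjusting pixel values until the image's network features match the features decoded from brain activity, can be sketched in miniature. In this toy (entirely my own stand-in, not the authors' pipeline), a single random linear layer with a leaky-ReLU nonlinearity replaces the deep network, and the features of a hidden "stimulus" vector stand in for the fMRI-decoded features; gradient descent on the "pixels" then drives the feature mismatch down, exactly the feature-matching loop of deep image reconstruction.

```python
import random

random.seed(0)
D_PIX, D_FEAT = 64, 32  # tiny "image" and feature sizes for illustration

# Fixed random "network": features f(x) = leaky_relu(W x)
W = [[random.gauss(0, 1) / D_PIX ** 0.5 for _ in range(D_PIX)]
     for _ in range(D_FEAT)]

def features_and_pre(x):
    pre = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    # leaky ReLU keeps the toy optimization well-behaved (no dead units)
    f = [p if p > 0.0 else 0.1 * p for p in pre]
    return f, pre

# Pretend these were decoded from fMRI: features of a hidden "stimulus".
stimulus = [random.gauss(0, 1) for _ in range(D_PIX)]
target, _ = features_and_pre(stimulus)

def loss_and_grad(x):
    f, pre = features_and_pre(x)
    err = [fi - ti for fi, ti in zip(f, target)]
    loss = 0.5 * sum(e * e for e in err)
    # d loss / d x = W^T (err * slope(pre)), slope = 1 or 0.1 (leaky ReLU)
    grad = [0.0] * D_PIX
    for e, p, row in zip(err, pre, W):
        slope = 1.0 if p > 0.0 else 0.1
        for i in range(D_PIX):
            grad[i] += e * slope * row[i]
    return loss, grad

# Gradient descent on the pixels, starting from a random image.
x = [random.gauss(0, 1) for _ in range(D_PIX)]
loss0, _ = loss_and_grad(x)
for _ in range(300):
    loss, grad = loss_and_grad(x)
    x = [xi - 0.3 * g for xi, g in zip(x, grad)]
print(f"feature loss: {loss0:.3f} -> {loss:.3f}")
```

The actual method matches decoded features across multiple DNN layers and optionally constrains the search to the input space of a deep generator network for natural-looking results; this sketch only shows the gradient loop at its core.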