The Ethiopian government is calling for the restitution of a sacred object that is sealed inside an altar in London’s Westminster Abbey. The object, known as a tabot, is a tablet that symbolically represents the Ark of the Covenant and the Ten Commandments. Every Ethiopian church houses a covered tabot, which is regarded as sacrosanct and must be seen only by the priest.
Westminster Abbey’s tabot was looted at the Battle of Maqdala (formerly spelled Magdala) in 1868, when British troops attacked the forces of the Ethiopian emperor Tewodros.
Last month (in 2018), following publicity surrounding the opening of the abbey’s museum, the Ethiopian ambassador in London reiterated his government’s claim. Hailemichael Aberra Afework told The Art Newspaper: “We are urging all those who hold items looted from Ethiopia to return them. This includes the tabot held at Westminster Abbey.” It would be inappropriate for a tabot to be displayed in a museum, so he believes it should be returned to a church, with the Ethiopian synod permitted to decide which one.
Latest work. It has taken its place at the entrance of “A Room of One’s Own” in Rhapsodos Mozaik with two sentences. One: “Life ran out while I was saying I couldn’t do it, couldn’t manage it, couldn’t go.” And the other: “Life is a massacre of dreams; a graveyard of dreams trampled, betrayed, sold, abandoned, forgotten… What a waste” (Krala Veda, Pierre Schoendoerffer, translated by Özdemir İnce).
Hoping to meet, at “Rhapsodos Mozaik,” everyone who feeds their hope and the dreams they chase with Art and Literature, and who never gives up…
In January 2017, Śrī Vāsudevānanda Saraswatī, speaking about the spiritual significance of the mother, referred to Mother Madālasā singing to her children, telling them that they are śuddha, buddha and mukta: pure, conscious and free.
The song is part of the Madālasā Upadesa, or teachings of Madālasā: eight verses of beautiful Sanskrit.
Gabriella Burnel was commissioned to set the verses of the song to music and perform them. The commission was funded by friends and students of the School of Practical Philosophy in Australia. The film, generously provided by Gabriella, is a video recording of the final audio session in the London studios.
Lex Fridman of MIT shows how one day your (semi-)autonomous vehicle may ask you to take over if you are too distracted, for example by texting instead of driving.
Introducing our shared autonomy research vehicle with a demo of voice-based transfer of control based on whether the driver is paying attention to the road and on risk factors detected in the external environment. Our work aims to reimagine the self-driving car as a shared autonomy system built around the human being. Paper describing key ideas will be out shortly.
First Man to Walk on the Moon, but Rejected for a Credit Card in 1974
Along with the rejection letter, the Diners Club sent back the $15 check that Neil Armstrong had included with his application for the credit card.
The actual letter will be sold as part of an auction held by the Neil Armstrong family. A batch of about 800 items will be sold on Nov. 1 and 2 in Dallas. As a preview, some will be on exhibit Oct. 1-5, 2018, at Heritage Auctions in Manhattan.
Nicole Hone is an industrial designer based in Wellington, New Zealand.
Hydrophytes is a series of futuristic aquatic plants created with multi-material 3D printing. The project explores the design and choreography of movement to bring objects to life through 4D printing. The film is true to life with no effects added in post-processing.
“The team has created a first-of-its-kind algorithm that can interpret and accurately reproduce images seen or imagined by a person,” wrote Alexandru Micu in ZME Science.
Their paper, “Deep image reconstruction from human brain activity,” is on bioRxiv. The authors are Guohua Shen, Tomoyasu Horikawa, Kei Majima, and Yukiyasu Kamitani.
Vanessa Ramirez, associate editor of Singularity Hub, was one of several writers on tech-watching sites who reported on the study. The writers noted that this marks a departure from earlier research, which reconstructed images from pixels and basic shapes.
“Trying to tame a computer to decode mental images isn’t a new idea,” said Micu. “However, all previous systems have been limited in scope and ability. Some can only handle narrow domains like facial shape, while others can only rebuild images from preprogrammed images or categories.”
What is special here, Micu said, is that “their new algorithm can generate new, recognizable images from scratch.”
The study team has been exploring deep image reconstruction. Micu quoted the study’s senior author, Yukiyasu Kamitani: “We believe that a deep neural network is a good proxy for the brain’s hierarchical processing.”
Figure: Overview of deep image reconstruction. The pixel values of the input image are optimized so that the DNN features of the image are similar to those decoded from fMRI activity. A deep generator network (DGN) can optionally be combined with the DNN to produce natural-looking images; in that case, optimization is performed in the input space of the DGN. Credit: bioRxiv (2017). DOI: 10.1101/240317
From the paper’s abstract: Machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns has enabled the visualization of perceptual content. However, it has been limited to the reconstruction with low-level image bases (Miyawaki et al., 2008; Wen et al., 2016) or to the matching to exemplars (Naselaris et al., 2009; Nishimoto et al., 2011). Recent work showed that visual cortical activity can be decoded (translated) into hierarchical features of a deep neural network (DNN) for the same input image, providing a way to make use of the information from hierarchical visual features (Horikawa & Kamitani, 2017). Here, we present a novel image reconstruction method, in which the pixel values of an image are optimized to make its DNN features similar to those decoded from human brain activity at multiple layers. We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery. While our model was solely trained with natural images, our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed reconstructs or generates images from brain activity, not simply matches to exemplars. A natural image prior introduced by another deep neural network effectively rendered semantically meaningful details to reconstructions by constraining reconstructed images to be similar to natural images. Furthermore, human judgment of reconstructions suggests the effectiveness of combining multiple DNN layers to enhance visual quality of generated images. The results suggest that hierarchical visual information in the brain can be effectively combined to reconstruct perceptual and subjective images.
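The optimization at the heart of the method — adjusting an image’s pixel values until a network’s features of that image match the features decoded from brain activity — can be sketched in a deliberately toy form. Everything below is an assumption-laden simplification: a fixed random linear map stands in for the DNN feature extractor, and a randomly generated target vector stands in for the fMRI-decoded features; the actual paper uses multiple layers of a deep convolutional network, with an optional deep generator network as a natural-image prior.

```python
import numpy as np

# Toy stand-in for one DNN layer: a fixed random linear feature map.
rng = np.random.default_rng(0)
n_pixels, n_features = 64, 16
W = rng.standard_normal((n_features, n_pixels))

def features(img):
    """Map a flat 'image' (pixel vector) to a feature vector."""
    return W @ img

# Stand-in for features decoded from fMRI activity (in the paper these
# come from decoders trained to predict DNN features from brain data).
true_img = rng.standard_normal(n_pixels)
target = features(true_img)

# Core idea of deep image reconstruction: start from a blank image and
# do gradient descent on the pixels to minimize the feature-matching
# loss 0.5 * ||features(img) - target||^2.
img = np.zeros(n_pixels)
lr = 0.005
for _ in range(2000):
    err = features(img) - target   # error in feature space
    grad = W.T @ err               # gradient of the loss w.r.t. pixels
    img -= lr * grad

loss = 0.5 * np.sum((features(img) - target) ** 2)
```

With a linear map the loss is convex, so the pixels converge to an image whose features match the target exactly (the system is underdetermined, so many such images exist — which is why the paper adds a deep generator network prior to pick natural-looking ones). With a real multi-layer DNN the same loop applies, but the gradient is obtained by backpropagation through the network.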