Archive for the ‘Science & Technology’ Category
Posted: 10 Nov 2013 03:04 AM PST
By Nikos Lygeros
With the most official seismic data from the company PGS and from YPEKA (the Ministry of Environment, Energy and Climate Change), we are entering a new phase on the question of exploiting the Greek EEZ. The scientific and the technological now move in step in practice and exert pressure on the political side. There is no longer any dispute over the estimates, since we now have data. Consequently, many are changing course and now support the EEZ framework. Moreover, since the data concern the Ionian Sea and the area south of Crete, they bear directly on the delimitation of the Greek EEZ with Albania, Italy and Libya. There must therefore also be a coordination of moves: for the offshore blocks to be put out to tender, the framework must be prepared even if no agreement has yet been signed, just as was done with the offshore blocks of the Cypriot EEZ before the official agreement with Israel in 2010. Within this framework we must act effectively, without bureaucratic obstacles, so as to be ready for the companies' bids during the presidency of the European Union. At the strategic level the European arena offers very good timing, both for Greece and for Italy, and this relates directly to the issue of Albania and the TAP pipeline. What is needed, then, is the strategic mix now being organized around this new opening for our country. Every move we make must now be an act that creates a future and breaks with the past, because this is a phase change, not mere gestures for show that leave us indifferent. Greece truly needs its EEZ in order to exploit it, to overcome its economic obstacles, and to become a credible geopolitical player at the European and international level through the energy sector.
For this phase, however, our education system must also be prepared, at the scientific level, in management, and in strategic planning from the national level down to the regions, since there is a direct connection with the Hydrocarbons Law, which defines our field of action.
(seen on Greek Surnames)
A wonderful article from:
(via the Tracing Knowledge blog)
K.K.'s comment: So that you can better follow our people's protest mobilizations against themselves._K.K.
Many researchers believe that physics will not be complete until it can explain not just the behaviour of space and time, but where these entities come from.
28 August 2013 | by Zeeya Merali
“Imagine waking up one day and realizing that you actually live inside a computer game,” says Mark Van Raamsdonk, describing what sounds like a pitch for a science-fiction film. But for Van Raamsdonk, a physicist at the University of British Columbia in Vancouver, Canada, this scenario is a way to think about reality. If it is true, he says, “everything around us — the whole three-dimensional physical world — is an illusion born from information encoded elsewhere, on a two-dimensional chip”. That would make our Universe, with its three spatial dimensions, a kind of hologram, projected from a substrate that exists only in lower dimensions.
This ‘holographic principle’ is strange even by the usual standards of theoretical physics. But Van Raamsdonk is one of a small band of researchers who think that the usual ideas are not yet strange enough. If nothing else, they say, neither of the two great pillars of modern physics — general relativity, which describes gravity as a curvature of space and time, and quantum mechanics, which governs the atomic realm — accounts for the existence of space and time. Neither does string theory, which describes elementary threads of energy.
Van Raamsdonk and his colleagues are convinced that physics will not be complete until it can explain how space and time emerge from something more fundamental — a project that will require concepts at least as audacious as holography. They argue that such a radical reconceptualization of reality is the only way to explain what happens when the infinitely dense ‘singularity’ at the core of a black hole distorts the fabric of space-time beyond all recognition, or how researchers can unify atomic-level quantum theory and planet-level general relativity — a project that has resisted theorists’ efforts for generations.
“All our experiences tell us we shouldn’t have two dramatically different conceptions of reality — there must be one huge overarching theory,” says Abhay Ashtekar, a physicist at Pennsylvania State University in University Park.
Finding that one huge theory is a daunting challenge. Here, Nature explores some promising lines of attack — as well as some of the emerging ideas about how to test these concepts (see ‘The fabric of reality’).
Gravity as thermodynamics
One of the most obvious questions to ask is whether this endeavour is a fool’s errand. Where is the evidence that there actually is anything more fundamental than space and time?
A provocative hint comes from a series of startling discoveries made in the early 1970s, when it became clear that quantum mechanics and gravity were intimately intertwined with thermodynamics, the science of heat.
In 1974, most famously, Stephen Hawking of the University of Cambridge, UK, showed that quantum effects in the space around a black hole will cause it to spew out radiation as if it were hot. Other physicists quickly determined that this phenomenon was quite general. Even in completely empty space, they found, an astronaut undergoing acceleration would perceive that he or she was surrounded by a heat bath. The effect would be too small to be perceptible for any acceleration achievable by rockets, but it seemed to be fundamental. If quantum theory and general relativity are correct — and both have been abundantly corroborated by experiment — then the existence of Hawking radiation seemed inescapable.
A second key discovery was closely related. In standard thermodynamics, an object can radiate heat only by decreasing its entropy, a measure of the number of quantum states inside it. And so it is with black holes: even before Hawking’s 1974 paper, Jacob Bekenstein, now at the Hebrew University of Jerusalem, had shown that black holes possess entropy. But there was a difference. In most objects, the entropy is proportional to the number of atoms the object contains, and thus to its volume. But a black hole’s entropy turned out to be proportional to the surface area of its event horizon — the boundary out of which not even light can escape. It was as if that surface somehow encoded information about what was inside, just as a two-dimensional hologram encodes a three-dimensional image.
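The entropy–area relationship can be made concrete with a back-of-the-envelope calculation. The sketch below (my own illustration, not a computation from the article) applies the Bekenstein–Hawking formula S = k·A/(4·l_p²) to a black hole of one solar mass:

```python
import math

# Physical constants (SI units, CODATA-style rounded values)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
k_B = 1.381e-23     # Boltzmann constant, J/K

def bh_entropy(mass_kg):
    """Bekenstein-Hawking entropy S = k_B * A / (4 * l_p^2),
    where A is the horizon area and l_p the Planck length."""
    r_s = 2 * G * mass_kg / c**2      # Schwarzschild radius
    area = 4 * math.pi * r_s**2       # area of the event horizon
    l_p2 = hbar * G / c**3            # Planck length squared
    return k_B * area / (4 * l_p2)

M_sun = 1.989e30  # solar mass, kg
S = bh_entropy(M_sun)
print(f"Entropy of a solar-mass black hole: {S:.2e} J/K")
```

The answer is enormous (of order 10^54 J/K, or some 10^77 in units of k_B), far larger than the entropy of the star that collapsed to form the hole, which is part of what made the area scaling so striking.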
In 1995, Ted Jacobson, a physicist at the University of Maryland in College Park, combined these two findings, and postulated that every point in space lies on a tiny ‘black-hole horizon’ that also obeys the entropy–area relationship. From that, he found, the mathematics yielded Einstein’s equations of general relativity — but using only thermodynamic concepts, not the idea of bending space-time1.
“This seemed to say something deep about the origins of gravity,” says Jacobson. In particular, the laws of thermodynamics are statistical in nature — a macroscopic average over the motions of myriad atoms and molecules — so his result suggested that gravity is also statistical, a macroscopic approximation to the unseen constituents of space and time.
In 2010, this idea was taken a step further by Erik Verlinde, a string theorist at the University of Amsterdam, who showed2 that the statistical thermodynamics of the space-time constituents — whatever they turned out to be — could automatically generate Newton’s law of gravitational attraction.
And in separate work, Thanu Padmanabhan, a cosmologist at the Inter-University Centre for Astronomy and Astrophysics in Pune, India, showed3 that Einstein’s equations can be rewritten in a form that makes them identical to the laws of thermodynamics — as can many alternative theories of gravity. Padmanabhan is currently extending the thermodynamic approach in an effort to explain the origin and magnitude of dark energy: a mysterious cosmic force that is accelerating the Universe’s expansion.
Testing such ideas empirically will be extremely difficult. In the same way that water looks perfectly smooth and fluid until it is observed on the scale of its molecules — a fraction of a nanometre — estimates suggest that space-time will look continuous all the way down to the Planck scale: roughly 10⁻³⁵ metres, or some 20 orders of magnitude smaller than a proton.
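The "20 orders of magnitude" comparison is easy to verify. A quick sketch (the numerical values are standard textbook estimates, not taken from the article):

```python
import math

l_planck = 1.616e-35   # Planck length, metres
r_proton = 8.4e-16     # proton charge radius, metres (~0.84 fm)

# How many powers of ten separate the proton from the Planck scale?
orders = math.log10(r_proton / l_planck)
print(f"The proton is ~{orders:.0f} orders of magnitude "
      f"larger than the Planck length")
```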
But it may not be impossible. One often-mentioned way to test whether space-time is made of discrete constituents is to look for delays as high-energy photons travel to Earth from distant cosmic events such as supernovae and γ-ray bursts. In effect, the shortest-wavelength photons would sense the discreteness as a subtle bumpiness in the road they had to travel, which would slow them down ever so slightly. Giovanni Amelino-Camelia, a quantum-gravity researcher at the University of Rome, and his colleagues have found4 hints of just such delays in the photons from a γ-ray burst recorded in April. The results are not definitive, says Amelino-Camelia, but the group plans to expand its search to look at the travel times of high-energy neutrinos produced by cosmic events. He says that if theories cannot be tested, “then to me, they are not science. They are just religious beliefs, and they hold no interest for me.”
Other physicists are looking at laboratory tests. In 2012, for example, researchers from the University of Vienna and Imperial College London proposed5 a tabletop experiment in which a microscopic mirror would be moved around with lasers. They argued that Planck-scale granularities in space-time would produce detectable changes in the light reflected from the mirror (see Nature http://doi.org/njf; 2012).
Loop quantum gravity
Even if it is correct, the thermodynamic approach says nothing about what the fundamental constituents of space and time might be. If space-time is a fabric, so to speak, then what are its threads?
One possible answer is quite literal. The theory of loop quantum gravity, which has been under development since the mid-1980s by Ashtekar and others, describes the fabric of space-time as an evolving spider’s web of strands that carry information about the quantized areas and volumes of the regions they pass through6. The individual strands of the web must eventually join their ends to form loops — hence the theory’s name — but have nothing to do with the much better-known strings of string theory. The latter move around in space-time, whereas strands actually are space-time: the information they carry defines the shape of the space-time fabric in their vicinity.
Because the loops are quantum objects, however, they also define a minimum unit of area in much the same way that ordinary quantum mechanics defines a minimum ground-state energy for an electron in a hydrogen atom. This quantum of area is a patch roughly one Planck scale on a side. Try to insert an extra strand that carries less area, and it will simply disconnect from the rest of the web. It will not be able to link to anything else, and will effectively drop out of space-time.
[Video: this simulation shows how space evolves in loop quantum gravity. The colours of the faces of the tetrahedra indicate how much area exists at a given point at a particular moment of time. Available at Nature.]
One welcome consequence of a minimum area is that loop quantum gravity cannot squeeze an infinite amount of curvature onto an infinitesimal point. This means that it cannot produce the kind of singularities that cause Einstein’s equations of general relativity to break down at the instant of the Big Bang and at the centres of black holes.
In 2006, Ashtekar and his colleagues reported7 a series of simulations that took advantage of that fact, using the loop quantum gravity version of Einstein’s equations to run the clock backwards and visualize what happened before the Big Bang. The reversed cosmos contracted towards the Big Bang, as expected. But as it approached the fundamental size limit dictated by loop quantum gravity, a repulsive force kicked in and kept the singularity open, turning it into a tunnel to a cosmos that preceded our own.
This year, physicists Rodolfo Gambini at the University of the Republic in Montevideo, Uruguay, and Jorge Pullin at Louisiana State University in Baton Rouge reported8 a similar simulation for a black hole. They found that an observer travelling deep into the heart of a black hole would encounter not a singularity, but a thin space-time tunnel leading to another part of space. “Getting rid of the singularity problem is a significant achievement,” says Ashtekar, who is working with other researchers to identify signatures that would have been left by a bounce, rather than a bang, on the cosmic microwave background — the radiation left over from the Universe’s massive expansion in its infant moments.
Loop quantum gravity is not a complete unified theory, because it does not include any other forces. Furthermore, physicists have yet to show how ordinary space-time would emerge from such a web of information. But Daniele Oriti, a physicist at the Max Planck Institute for Gravitational Physics in Golm, Germany, is hoping to find inspiration in the work of condensed-matter physicists, who have produced exotic phases of matter that undergo transitions described by quantum field theory. Oriti and his colleagues are searching for formulae to describe how the Universe might similarly change phase, transitioning from a set of discrete loops to a smooth and continuous space-time. “It is early days and our job is hard because we are fishes swimming in the fluid at the same time as trying to understand it,” says Oriti.
Such frustrations have led some investigators to pursue a minimalist programme known as causal set theory. Pioneered by Rafael Sorkin, a physicist at the Perimeter Institute in Waterloo, Canada, the theory postulates that the building blocks of space-time are simple mathematical points that are connected by links, with each link pointing from past to future. Such a link is a bare-bones representation of causality, meaning that an earlier point can affect a later one, but not vice versa. The resulting network is like a growing tree that gradually builds up into space-time. “You can think of space emerging from points in a similar way to temperature emerging from atoms,” says Sorkin. “It doesn’t make sense to ask, ‘What’s the temperature of a single atom?’ You need a collection for the concept to have meaning.”
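The structure Sorkin describes (points, links from past to future, no cycles) can be illustrated with a toy model. The sketch below is my own illustration, not Sorkin's actual construction: it "sprinkles" random points into a patch of 1+1-dimensional flat space-time and connects them wherever one point lies in the causal future of another, then checks that the result really is a partial order:

```python
import random

random.seed(0)

# Sprinkle N points uniformly into a unit square of 1+1-dimensional
# Minkowski space; each point is an 'atom' of space-time.
N = 50
points = [(random.random(), random.random()) for _ in range(N)]  # (t, x)

def precedes(p, q):
    """p causally precedes q if q lies inside p's future light cone
    (units with c = 1)."""
    dt, dx = q[0] - p[0], q[1] - p[1]
    return dt > abs(dx)

# The causal relation: a directed acyclic graph whose every edge
# points from past to future.
relation = {(i, j) for i in range(N) for j in range(N)
            if i != j and precedes(points[i], points[j])}

# Sanity check, transitivity: if i < j and j < k then i < k.
assert all((i, k) in relation
           for (i, j) in relation
           for (j2, k) in relation if j == j2)

print(f"{len(relation)} causal relations among {N} points")
```

In causal set theory proper, the points carry no coordinates at all; the order relation itself is the fundamental data, and geometry is supposed to emerge from it statistically, just as Sorkin's temperature analogy suggests.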
In the late 1980s, Sorkin used this framework to estimate9 the number of points that the observable Universe should contain, and reasoned that they should give rise to a small intrinsic energy that causes the Universe to accelerate its expansion. A few years later, the discovery of dark energy confirmed his guess. “People often think that quantum gravity cannot make testable predictions, but here’s a case where it did,” says Joe Henson, a quantum-gravity researcher at Imperial College London. “If the value of dark energy had been larger, or zero, causal set theory would have been ruled out.”
Causal dynamical triangulations
That hardly constituted proof, however, and causal set theory has offered few other predictions that could be tested. Some physicists have found it much more fruitful to use computer simulations. The idea, which dates back to the early 1990s, is to approximate the unknown fundamental constituents with tiny chunks of ordinary space-time caught up in a roiling sea of quantum fluctuations, and to follow how these chunks spontaneously glue themselves together into larger structures.
The earliest efforts were disappointing, says Renate Loll, a physicist now at Radboud University in Nijmegen, the Netherlands. The space-time building blocks were simple hyper-pyramids — four-dimensional counterparts to three-dimensional tetrahedrons — and the simulation’s gluing rules allowed them to combine freely. The result was a series of bizarre ‘universes’ that had far too many dimensions (or too few), and that folded back on themselves or broke into pieces. “It was a free-for-all that gave back nothing that resembles what we see around us,” says Loll.
Causal dynamical triangulation
[Video: this causal dynamical triangulation uses just two dimensions, one of space and one of time. It shows two-dimensional universes generated by pieces of space assembling themselves according to quantum rules. Each colour represents a slice through the universe at a particular time after the Big Bang, which is depicted as a tiny black ball.]
But, like Sorkin, Loll and her colleagues found that adding causality changed everything. After all, says Loll, the dimension of time is not quite like the three dimensions of space. “We cannot travel back and forth in time,” she says. So the team changed its simulations to ensure that effects could not come before their cause — and found that the space-time chunks started consistently assembling themselves into smooth four-dimensional universes with properties similar to our own10.
Intriguingly, the simulations also hint that soon after the Big Bang, the Universe went through an infant phase with only two dimensions — one of space and one of time. This prediction has also been made independently by others attempting to derive equations of quantum gravity, and even some who suggest that the appearance of dark energy is a sign that our Universe is now growing a fourth spatial dimension. Others have shown that a two-dimensional phase in the early Universe would create patterns similar to those already seen in the cosmic microwave background.
Meanwhile, Van Raamsdonk has proposed a very different idea about the emergence of space-time, based on the holographic principle. Inspired by the hologram-like way that black holes store all their entropy at the surface, this principle was first given an explicit mathematical form by Juan Maldacena, a string theorist at the Institute for Advanced Study in Princeton, New Jersey, who published11 his influential model of a holographic universe in 1998. In that model, the three-dimensional interior of the universe contains strings and black holes governed only by gravity, whereas its two-dimensional boundary contains elementary particles and fields that obey ordinary quantum laws without gravity.
Hypothetical residents of the three-dimensional space would never see this boundary, because it would be infinitely far away. But that does not affect the mathematics: anything happening in the three-dimensional universe can be described equally well by equations in the two-dimensional boundary, and vice versa.
In 2010, Van Raamsdonk studied what that means when quantum particles on the boundary are ‘entangled’ — meaning that measurements made on one inevitably affect the other12. He discovered that if every particle entanglement between two separate regions of the boundary is steadily reduced to zero, so that the quantum links between the two disappear, the three-dimensional space responds by gradually dividing itself like a splitting cell, until the last, thin connection between the two halves snaps. Repeating that process will subdivide the three-dimensional space again and again, while the two-dimensional boundary stays connected. So, in effect, Van Raamsdonk concluded, the three-dimensional universe is being held together by quantum entanglement on the boundary — which means that in some sense, quantum entanglement and space-time are the same thing.
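The picture of entanglement "holding space together" relies on entanglement being a quantifiable resource. As a minimal sketch of the standard measure (not Van Raamsdonk's actual calculation), the code below computes the entanglement entropy of a maximally entangled Bell pair, which comes out to exactly one bit; reducing such links to zero is the operation Van Raamsdonk studied:

```python
import math

# Amplitudes of the Bell state |Phi+> = (|00> + |11>)/sqrt(2),
# indexed by the two qubit values (a, b).
amp = {(0, 0): 1 / math.sqrt(2), (1, 1): 1 / math.sqrt(2)}

def a(i, j):
    return amp.get((i, j), 0.0)

# Reduced density matrix of qubit A: trace out qubit B.
# rho_A[i][j] = sum_b amp(i, b) * conj(amp(j, b))   (real amplitudes here)
rho_A = [[sum(a(i, b) * a(j, b) for b in (0, 1)) for j in (0, 1)]
         for i in (0, 1)]

# Eigenvalues of the 2x2 real symmetric matrix, from trace and determinant
tr = rho_A[0][0] + rho_A[1][1]
det = rho_A[0][0] * rho_A[1][1] - rho_A[0][1] * rho_A[1][0]
disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
evals = [(tr + disc) / 2, (tr - disc) / 2]

# Von Neumann entropy S = -sum p log2(p), in bits
S = -sum(p * math.log2(p) for p in evals if p > 1e-12)
print(f"Entanglement entropy of a Bell pair: {S:.3f} bits")
```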
Or, as Maldacena puts it: “This suggests that quantum is the most fundamental, and space-time emerges from it.”
- Read the full article at Nature
How true is this? Any reader who knows more, please send an e-mail to email@example.com._K.K.
I have booked an appointment at the hospital to replace a few parts that are showing signs of fatigue, such as heart, lungs (I don't smoke), legs, lower back and …beep.
If what this article says holds even a trace of truth, Duncan McLeod will be most pleased!
I dedicate the article below to my dear (and respected) friend Eleftherios Anevlavis. My reason for doing so carries a grain of humour: for a long time now I have been saying that I will live for …ever, while he insists that Death awaits me. If, as the article says, we manage to preserve Information forever, then, since we are made of information, behold the Eternal Life promised by the various "gods"._K.K.
Data written to a glass “memory crystal” could remain intact for a million years, according to scientists from the UK and the Netherlands who have demonstrated the technology for the first time. The data-storage technique uses a laser to alter the optical properties of fused quartz at the nanoscale. The researchers say it has the potential to store a staggering 360 terabytes of data (equivalent to 75,000 DVDs) on a standard-sized disc.
Longevity and capacity are the key factors to consider in terms of data storage, but existing options are limited. “At the moment, companies have to back up their archives every five to ten years because hard-drive memory has a relatively short lifespan,” explains Jingyu Zhang of the University of Southampton, UK, who led the team that demonstrated the new technique. Optical storage media such as DVDs are more stable, but with standard single-layer discs maxing out at 4.7 GB of data, they are an unwieldy option for vast digital archives.
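The "75,000 DVDs" equivalence quoted above is simple arithmetic, and the sketch below checks it (the figure is evidently rounded):

```python
capacity_tb = 360   # claimed capacity of one disc, terabytes
dvd_gb = 4.7        # standard single-layer DVD, gigabytes

# Number of single-layer DVDs needed to hold the same data
dvds = capacity_tb * 1e12 / (dvd_gb * 1e9)
print(f"360 TB is roughly {dvds:,.0f} single-layer DVDs")
```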
Scientists have been pursuing the idea of glass as a medium for mass data storage since 1996, when it was first suggested that data could be written optically into transparent materials. By using a femtosecond laser to alter the physical structure of fused quartz, a “dot” with a different refractive index can be created to denote the binary digit one; zeros are indicated by the absence of a dot. Japanese electronics giant Hitachi succeeded in storing data using this method back in 2009, but Zhang’s team has taken the technology a step further, by recording information in 5D – the three dimensions of space that describe the physical location of the dot, and two additional dimensions that are encoded by the polarity and intensity of the beam that creates the dot.
To demonstrate the new method, Zhang’s team wrote a 300 kB digital text file into fused quartz glass using a femtosecond laser that produced extremely short and intense pulses of light at a 200 kHz repetition rate. The pulses were sent through a spatial light modulator (SLM), which split the light into 256 separate beams to create a holographic image. A specially designed laser-imprinted half-wave plate matrix was built to control the polarization of the light without the need for moving parts. The laser-imprinted dots were arranged in three planes separated by a distance of five microns, on a sliver of fused quartz, and dubbed “Superman memory crystals” after the once-fanciful technology featured in the Superman films.
The data file was read using a standard optical microscope in conjunction with a polarizing filter, to measure the way that light transmission was altered by the dots. The read-out showed each dot as a blurred spot of varying intensity, in one of four colours to indicate polarity – a level of optical data encoding that represents a significant improvement over simple 3D systems such as conventional DVDs or even Hitachi’s, according to Zhang. “Consider that when you read a DVD, while you read one spot it’s actually one bit, but in our case, it’s many more bits – 10 bits,” he explains, adding that they “expect 10 times higher reading rates too”.
Outlasting the human race
The researchers claim that their memory crystals “[open] the era of unlimited lifetime data storage.” As well as providing unprecedented capacity and high-speed reading, fused quartz is exceptionally stable and can withstand temperatures up to 1000 °C. “We think it should potentially last a million years,” enthuses Zhang, meaning the stored data will likely outlast the human race.
Xiangping Li, a physicist working on multidimensional optical data storage at Swinburne University of Technology in Hawthorn, Australia, calls the work “quite innovative”, and suggests that the estimated storage capacity would be beefed up even more if the parameters used for the fourth or fifth dimensions were less closely intertwined. “[Currently] these parameters are not orthogonal to each other, so it will create significant crosstalk…it’s a grey scale,” he explains.
Zhang’s group is designing a simple scanning laser read-out device that will enable the reading technology to be brought cheaply into homes in the near future. The same cannot be said for the writing technology, however – there needs to be a significant breakthrough before we could be saving our personal music and photograph collections to memory crystal. National labs, cloud-computing clusters and other large data-generating enterprises, on the other hand, are obvious immediate candidates for early adoption. “Museums that want to preserve information, or places like the National Archives where they have huge numbers of documents, would really benefit,” says Zhang.
The researchers are looking to combine with industry partners to develop a higher-powered laser but, ahead of that, they plan to switch the SLM for another on the market that should increase their writing speed from kilobytes-per-second to megabytes-per-second, and are keeping a keen eye on the current development of an even better version that should offer them speeds of gigabytes-per-second.
About the author
Ceri Perkins is a science writer based in the US
from: PhysicsWorld.com
Teleporting states between atomic gases
The macroscopic quantum spin state of caesium atoms held in a vessel has been teleported to a second vessel 50 cm away – according to physicists in Denmark, Spain and the UK, who have performed the feat. Although this distance is far smaller than the 143 km record for the quantum teleportation of relatively simple states, the experiment achieves a different type of teleportation that had previously been achieved only across microscopic distances. The technique can teleport complex quantum states and could therefore have a range of technological applications – including quantum computing, long-distance quantum communication and remote sensing.
Quantum teleportation was first proposed in 1993 by Charles Bennett, of the IBM Thomas J Watson Research Center in New York, and colleagues. It allows one person (Alice) to send information about an unknown quantum state to another person (Bob) by exchanging purely classical information. It utilizes the quantum entanglement between two particles; one with Alice and one with Bob. Alice interacts the unknown quantum state with her half of the entangled state, measures the combined quantum state and sends the result through a classical channel to Bob. The act of measurement alters the state of Bob’s half of the entangled pair and this, combined with the result of Alice’s measurement, allows Bob to reconstruct the unknown quantum state.
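The discrete-state protocol described above can be simulated directly. The sketch below is a standard single-qubit teleportation circuit, not the continuous-variable experiment reported here: Alice Bell-measures her unknown qubit together with her half of a shared Bell pair, and Bob applies a correction chosen by her two classical bits. The simulation checks that Bob recovers Alice's state perfectly for every possible measurement outcome.

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-qubit gates
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op(gate, qubit):
    """Embed a 1-qubit gate on the given qubit of a 3-qubit register
    (qubit 0 is the leftmost tensor factor)."""
    mats = [I, I, I]
    mats[qubit] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def cnot(control, target):
    """CNOT on a 3-qubit register, built as a permutation matrix."""
    U = np.zeros((8, 8), dtype=complex)
    for i in range(8):
        bits = [(i >> (2 - q)) & 1 for q in range(3)]
        if bits[control]:
            bits[target] ^= 1
        U[bits[0] * 4 + bits[1] * 2 + bits[2], i] = 1
    return U

# Random unknown state |psi> on qubit 0 (Alice's input)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Qubits 1 (Alice) and 2 (Bob) share the Bell pair (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice's Bell measurement = CNOT(0 -> 1), H on qubit 0, then
# measure qubits 0 and 1 in the computational basis
state = op(H, 0) @ (cnot(0, 1) @ state)

for m0 in (0, 1):              # check every possible outcome
    for m1 in (0, 1):
        # Project qubits 0 and 1 onto the outcome (m0, m1);
        # what remains is Bob's (unnormalized) qubit
        bob = state.reshape(2, 2, 2)[m0, m1, :].copy()
        bob /= np.linalg.norm(bob)
        # Bob's correction from Alice's two classical bits
        if m1:
            bob = X @ bob
        if m0:
            bob = Z @ bob
        fidelity = abs(np.vdot(psi, bob)) ** 2
        assert np.isclose(fidelity, 1.0), (m0, m1, fidelity)

print("Teleportation succeeds for all four measurement outcomes")
```

Note that Bob's qubit carries no usable information until the two classical bits arrive, which is why teleportation does not transmit anything faster than light.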
This is usually demonstrated with discrete quantum states, such as single atomic spins that can be up, down or a superposition of these two states. In principle, however, it is possible to teleport quantum states that are effectively continuous, such as the collective spin of a large atomic ensemble. Furthermore, doing so would have interesting practical consequences for the development of technologies based on the teleportation process.
For Alice and Bob to send information using quantum teleportation, they must first be in possession of entangled particles (usually photons). Swapping entangled photons inevitably results in some being lost and this will have an effect on the reconstruction that Bob can make of Alice’s mystery quantum state. If the information being exchanged concerns a discrete state, it will be entangled with a single photon, which will either arrive or not arrive, and Bob will either make a perfect reproduction or no reproduction of the state. This is known as probabilistic quantum teleportation. If the information concerns a continuous state, it will be entangled with a pulse of light containing many photons. Some will arrive and others will not. Bob can always make a reconstruction of Alice’s quantum state but if losses are high then it will be less than perfect. This is deterministic quantum teleportation.
A key question is whether or not the fidelity with which Bob can reproduce Alice’s unknown quantum state exceeds the maximum possible fidelity achievable if Alice simply measured the state and told Bob the result – a limit imposed by Heisenberg’s uncertainty principle. This will depend not just on the proportion of photons lost but also on other experimental parameters, such as how long the quantum states can be preserved and the interactions between the unknown quantum state and the entangled particles.
This deterministic continuous-variable teleportation was proposed and realized in the lab by Eugene Polzik and colleagues at the Niels Bohr Institute in Copenhagen, together with researchers at the Institute of Photonic Sciences (ICFO) in Barcelona and the University of Nottingham. Their experimental set-up involves two room-temperature samples of caesium-133 gas held in glass containers and separated by about 50 cm. The aim of the experiment is to use light to teleport the collective quantum spin state of 10¹² atoms from one container to the other. The team extended the life of the state by coating the insides of the containers with a special material that does not absorb angular momentum from the atoms.
Precise control over the spin states of the system was achieved using constant and oscillating magnetic fields. The team also collaborated with theorists Christine Muschik at the ICFO and Ignacio Cirac of the Max Planck Institute for Quantum Optics, near Munich, to develop a new model of the interaction between the atoms and the light. Using these advances, they teleported multiple collective spin states between the two canisters and looked at the variance in their measurements. When they compared this with the theoretical minimum variance that could be achieved by sending the spin-state information in a purely classical manner, they found that the variance from their process was lower. “We have achieved the first deterministic, atomic-to-atomic teleportation over a macroscopic distance,” says Polzik.
Hugues de Riedmatten, a quantum-optics expert at the ICFO – who was not involved with the experiment – says that the research is “very significant”, describing the results as “convincing”. He cautions, however, that it is “a proof of principle”, saying “I think it’s a first step. If you would like to use it for doing useful things in quantum-information science, for example, you would need to transport much more complicated quantum states. It remains to be seen whether this will be possible or not.”
The research is published in Nature Physics.
About the author
Tim Wogan is a science writer based in the UK
Why, Calchas the seer, of course.
If these are the first steps, imagine what is coming when everyone who has the power, the money and the means will be able to control every kind of machine, from afar. Imagine, too, the middle class of robots that is fast approaching. Lucky(?) the young who will live in the Age of Aquarius._K.K.
Posted: 29 Apr 2013 09:00 AM PDT
The first GREEK aircraft, the "ARCHON SF 1", in flight, leaving the WORLD talking and the Americans scrambling... American companies have expressed interest in an aircraft built by a Greek police officer in Florina: Giorgos Iliopoulos. He is not an aeronautical engineer, nor an aircraft mechanic, nor even a technician of any specialty... He is simply a police officer... but one with a passion and love for aviation, and he is also the president of the Florina aero club. After painstaking work and 3,000 modifications to the original design, he built it at full scale using aviation-grade aluminium, and the "Archon", as he named it, made its first journey into the skies.
It is an ultralight, single-seat aircraft of just 200 kilograms, with an engine of just 46 horsepower, that resembles a fighter jet (most of its parts are made of the aluminium used in aeronautical engineering). It features many design innovations on a world scale (if you look closely at the details of its design, it leans toward STEALTH technology), and for this particular aircraft he has received a great many emails and phone calls from American companies that want to exploit it commercially.
The Hellenic Air Force gave him a warm welcome... but that was as far as it went, because the state, which should already have acted, is sleeping the sleep of the just. Imagine: the builder tried in vain to obtain a flight permit from the Greek authorities, and in the end he turned to Italy. Yes, to Italy, which is why its registration markings begin with the letter "I": "I-A281"... materials, but with simple, everyday materials that we all have at home. The first airplane he built was wooden and, in fact, recyclable, since the wood he used for its construction came from his old furniture, the seat he installed in the pilot's position was a wheelchair, and its mechanical parts came from an old car engine. As he himself has said, to complete the construction of the Lygkistis, as that airplane is called, he had to overcome both his fear and himself, since his venture risked being dismissed as the absurd story of a madman trying to build an airplane. ... Indeed, when it flew, his joy, as he says himself, was so great that he nearly had a heart attack.
Next stop: the big aviation event Sun And Fun in Florida, in a few days! http://www.veteranos.gr/2013/04/stealth-video.html
Image: Katie Zhuang, Nicolelis lab, Duke University
It’s not exactly a Vulcan mind meld, but it’s not far off. Scientists have wired the brains of two rats together and shown that signals from one rat’s brain can help the second rat solve a problem it would otherwise have no clue how to solve.
The rats were in different cages with no way to communicate other than through the electrodes implanted in their brains. The transfer of information from brain to brain even worked with two rats separated by thousands of kilometers, one in a lab in North Carolina and another in a lab in Brazil.
“We basically created a computational unit out of two brains,” says neuroscientist Miguel Nicolelis of Duke University, who led the study.
Nicolelis is a leading figure in brain-machine interface research and the man behind a bold plan to develop a brain-controlled exoskeleton that would allow a paralyzed person to walk onto the field and kick a soccer ball at the opening ceremony of next year’s World Cup in Brazil.
He says the new findings could point the way to future therapies aimed at restoring movement or language after a stroke or other brain injury by using signals from a healthy part of the brain to retrain the injured area. Other researchers say it’s an interesting idea, but it’s a long way off.
But Nicolelis’s group is known for pushing the envelope. Previously, they have given monkeys an artificial sense of touch they can use to distinguish the “texture” of virtual objects. More recently, they gave rats the ability to detect normally invisible infrared light by wiring an infrared detector to a part of the brain that processes touch. All this work, Nicolelis says, is relevant to developing neural prostheses to restore sensory feedback to people with brain injuries.
In the new study, the researchers implanted small electrode arrays in two regions of the rats’ brains, one involved in planning movements, and one involved in the sense of touch.
Then they trained several rats to poke their noses and whiskers through a small opening in the wall of their enclosure to determine its width. The scientists randomly changed the width of the opening to be either narrow or wide for each trial, and the rats had to learn to touch one of two spots depending on its width. They touched a spot to the right of the opening when it was wide and the spot on the left when it was narrow. When they got it correct, they received a drink. Eventually they got it right 95 percent of the time.
Next, the team wanted to see if signals from the brain of a rat trained to do this task could help another rat in a different cage choose the correct spot to poke with its nose — even if it had no other information to go on.
They tested this idea with another group of rats that hadn’t learned the task. In this experiment, one of these new rats sat in an enclosure with two potential spots to receive a reward but without an opening in the wall. On their own, they could only guess which of the two spots would produce a rewarding drink. As expected, they got it right 50 percent of the time.
Then the researchers recorded signals from one of the trained rats as it did the nose-poke task and used those signals to stimulate the second, untrained rat’s brain in a similar pattern. When it received this stimulation, the second rat’s performance climbed to 60 or 70 percent. That’s not nearly as good as the rats who could actually use their sense of touch to solve the problem, but it’s impressive given that the only information they had about which spot to choose came from another animal’s brain, Nicolelis says.
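As a toy illustration (a sketch, not the authors' model), those accuracy numbers are consistent with a simple noisy-channel picture: if the untrained rat correctly decodes the stimulation pattern on some fraction of trials and guesses on the rest, a decode rate of about 30% already lifts accuracy from the 50% chance baseline to roughly 65%.

```python
import random

random.seed(0)

def run_trials(n_trials=20000, decode_prob=0.3):
    """Toy model of the brain-to-brain task: on each trial the untrained rat
    decodes the stimulation pattern with probability decode_prob; otherwise
    it picks one of the two reward spots at random."""
    correct = 0
    for _ in range(n_trials):
        target = random.choice(("left", "right"))
        if random.random() < decode_prob:
            choice = target                            # pattern decoded correctly
        else:
            choice = random.choice(("left", "right"))  # pure guess
        correct += choice == target
    return correct / n_trials

print(run_trials(decode_prob=0.0))  # ~0.5: chance baseline, no usable signal
print(run_trials(decode_prob=0.3))  # ~0.65: within the reported 60-70% range
```

The decode probability here is a made-up parameter chosen to match the reported range, not a quantity measured in the study.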
Both rats had to make the correct choice; otherwise, neither one got a reward. When that happened, the first rat tended to make its decision more quickly on the next trial, and its brain activity seemed to send a clearer signal to the second rat, the team reports today in Scientific Reports. That suggests to Nicolelis that the rats were learning to cooperate.
The brain-to-brain communication link enables the rats to collaborate in a novel way, he says. “The animals compute by mutual experience,” he said. “It’s a computer that evolves, that’s not set by instructions or an algorithm.”
From an engineering perspective, the work is a remarkable demonstration that animals can use brain-to-brain communication to solve a problem, said Mitra Hartmann, a biomedical engineer who studies rats’ sense of touch at Northwestern University. “This is a first, to my knowledge, although the enabling technology has been around for a while.”
“From a scientific point of view, the study is noteworthy for the large number of important questions it raises, for example, what allows neurons to be so ‘plastic’ that the animal can learn to interpret the meaning of a particular stimulation pattern,” Hartmann said.
“It’s a pretty cool idea that they’re in tune with each other and working together,” said neuroscientist Bijan Pesaran of New York University. But Pesaran says he could use some more convincing that this is what’s actually going on. For example, he’d like to see the researchers extend the experiment to see if the rats on the receiving end of the brain-to-brain communication link could improve their performance even more. “If you could see them learning to do it better and faster, then I’d really be impressed.”
Pesaran says he’s open to the idea that brain-to-brain communication could one day be used to rehabilitate brain injury patients, but he thinks it might be possible to accomplish the same thing by stimulating the injured brain with computer-generated patterns of activity. “I don’t get why you’d need another brain to do that,” he said.