Physicists have been searching for the Higgs boson for nearly 50 years. In July 2012, two collaborations at the Large Hadron Collider at CERN announced the discovery of a new particle that meets the general expectations for a Higgs boson. But is this particle the final piece of the standard model, or something more exotic?
One important test is the parity of the particle: how its mirror image behaves. In a mirror, even-parity particles look the same, whereas odd-parity particles appear reversed. The standard model Higgs boson is a scalar, a spin-0 even-parity particle. But there are many models that include a spin-0 odd-parity particle known as a pseudoscalar.
For the first time, the CMS Collaboration at the LHC has placed constraints on this possibility. They study decays of the new particle to a pair of Z bosons, each of which, in turn, decays to a pair of leptons. They analyze the angular distribution of the leptons under the assumption that the new particle is spin 0, and find that the odd-parity pseudoscalar scenario is disfavored by the data. So all the evidence thus far is consistent with the new particle being the standard model Higgs boson. - Robert Garisto
Monday, 10 September 2012
Biggest Numbers In The Universe
There are numbers out there that are so enormously, impossibly vast that to even write them down would require the entire universe. But here's the really crazy thing... some of these incomprehensibly huge numbers are crucial for understanding the world.
When I say "the biggest number in the universe", what I really mean is the biggest meaningful number, the largest possible number that is in some way useful. There are lots of contenders for this title, but I'll warn you now: there is a very real risk that trying to understand all this will blow your mind. But then, with extreme math, that's half the fun.
Googol and Googolplex
We might as well begin with what are quite probably the two largest numbers you've ever heard of, and are in fact the two largest numbers with commonly accepted definitions in the English language. (There's a fairly robust nomenclature available for naming numbers as high as you want to go, but you won't find these in dictionaries at the present time.) The googol, which has since become world famous (albeit misspelled) in the form of Google, began life in 1920 as a way to get children interested in large numbers. To that end, mathematician Edward Kasner took his two nephews, Milton and Edwin Sirotta, on a walk through the New Jersey Palisades. He asked them for any ideas they might have, and the then nine-year-old Milton proposed "googol." Where he got this particular word is unknown, but Kasner decided that 10^100 - or, the number one followed by a hundred zeroes - would henceforth be known as a googol.
But young Milton wasn't finished - he also proposed an even larger number, the googolplex. This number, according to Milton, was 1 followed by as many zeroes as you could write before you got tired. Charming as that idea was, Kasner decided a more technical definition was needed. As he explained in his 1940 book Mathematics and the Imagination, Milton's definition left open the dicey possibility that a random buffoon could become a greater mathematician than Albert Einstein simply by possessing greater endurance.
So, Kasner decided a googolplex would be 10^googol, or 1 followed by a googol of zeroes. To put that another way - in the notation we'll use for the other numbers in this post - a googolplex is 10^10^100. For some mindbending perspective, Carl Sagan once pointed out that it would be physically impossible to write down all the zeroes in a googolplex, because there simply isn't enough room in the universe. If you filled the entire volume of the observable universe with fine dust particles roughly 1.5 micrometers in size, then the number of different ways in which you could arrange and number those particles would be about one googolplex.
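A googol, unlike a googolplex, fits comfortably in a computer's memory, so it makes a nice toy example. A quick sketch in Python (whose integers are arbitrary-precision):

```python
googol = 10 ** 100            # 1 followed by 100 zeros
print(len(str(googol)) - 1)   # -> 100 zeros, as promised

# A googolplex, 10**googol, has a googol digits, so no computer (or
# universe) can hold it written out; we can only work with it
# indirectly, for instance through its base-10 logarithm:
log10_googolplex = googol     # log10(10**googol) is exactly 10**100
```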
Linguistically speaking, googol and googolplex are probably the two biggest meaningful numbers (at least in English), but as we're about to find out, there's no end of ways to define "meaningful."
The Real World
If we're going to talk about the largest meaningful number, there's a reasonable argument that this really means we need to find the largest number with any real world significance. We can start the bidding with the current human population, which is about 6.92 billion. The global economy in 2010 is estimated to have been worth about $61.96 trillion, but both of those figures are dwarfed by the roughly 100 quadrillion cells that make up the human body. Of course, none of these can compare to the total number of particles in the universe, which is generally thought to be around 10^80 - a number so large that our language doesn't have an agreed upon word for it.
We can play around a bit with measurements as we get larger and larger - for instance, the weight of the Sun in tons will produce a smaller value than if you measure it in pounds. The fairest way to do this is to use the Planck units, which are the smallest possible measurements for which the laws of physics still hold. For instance, the age of the universe in Planck time is about 8 * 10^60. If we go right back to the first unit of Planck time after the Big Bang, we find the density of the universe was 5.1 * 10^96. We're getting bigger, but we still haven't even reached a googol.
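That 8 * 10^60 figure is easy to sanity-check yourself. Here's a two-line version using standard values for the Planck time and the age of the universe (both assumed here, not given in the text):

```python
planck_time = 5.39e-44                      # seconds
age_of_universe = 13.8e9 * 365.25 * 86400   # ~4.35e17 seconds
print(age_of_universe / planck_time)        # ~8.1e60 Planck times
```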
The largest number with any real world application - or, in this case, real worlds application - is probably 10^10^10^7, which is one recent estimate of the number of universes in the multiverse. That number is so huge that the human brain would be literally unable to perceive all those different universes, as the mind is only capable of roughly 10^10^16 configurations. Actually, that number is probably the biggest with any practical application, assuming you don't buy into the whole multiverse idea. But there are still far larger numbers lurking out there; to find them, we're going to need to venture into the realm of pure mathematics, and there's no better place to start than with the prime numbers.
The Mersenne Primes
Part of the difficulty here is coming up with a good definition for what a "meaningful" number actually is. One way to think about it is in terms of prime and composite numbers. A prime number, as you probably remember from high school math, is any number whose only divisors are 1 and itself. So, 2, 3, and 5 are all prime numbers, while 4 (2*2) and 6 (2*3) are both composite numbers. This means that any composite number can ultimately be reduced to its prime divisors. In a sense, a number like 5 is more important than, say, 4, because there's no way to express it in terms of smaller numbers.
Obviously, we can extend this quite a bit further. 100, for instance, is really just 2*2*5*5, which means that in a hypothetical world where our knowledge of numbers only went up to 5, mathematicians could still express the number 100. But the very next number, 101, is prime, which means the only way to express it is to have direct knowledge of its existence. This means that the largest known prime numbers are important in a way that, say, a googol - which is ultimately just a bunch of 2's and 5's multiplied together - really isn't. And, because prime numbers are essentially random, there's no known way to predict what impossibly large number will actually be prime. To this day, the discovery of a new prime number is a big deal.
Ancient Greek mathematicians understood the concept of prime numbers at least as far back as 500 BCE, but 2,000 years later people still only knew which numbers were prime up to about 750. Thinkers as far back as Euclid saw a potential shortcut, but it wouldn't be until the Renaissance that mathematicians could really put it into practice. The shortcut runs through what are known as the Mersenne numbers, named after the 17th-century French scholar Marin Mersenne. The idea is simple enough: a Mersenne number is any number of the form 2^n - 1, and a Mersenne prime is such a number that happens to be prime. For instance, 2^2 - 1 = 4 - 1 = 3, which is prime, and the same is true for 2^5 - 1 = 32 - 1 = 31.
It's much quicker and easier to identify Mersenne primes than any other type of prime, and computers have been hard at work looking for them for the past six decades. Until 1952, the largest known prime number was 2^127 - 1, a number with 39 digits. That year, computers determined that 2^521 - 1 is prime, and that number has 157 digits, which already makes it far bigger than a googol.
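The article doesn't say how the checking is done, but the standard tool - and the reason Mersenne candidates are so much faster to test than arbitrary numbers of the same size - is the Lucas-Lehmer test. A minimal sketch:

```python
def is_mersenne_prime(p: int) -> bool:
    """Lucas-Lehmer test: for an odd prime p, 2**p - 1 is prime exactly
    when s == 0 after p - 2 iterations of s -> s*s - 2 (mod 2**p - 1),
    starting from s = 4."""
    if p == 2:
        return True          # 2**2 - 1 = 3 is prime
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# The text's examples (p = 2 and p = 5) check out, while p = 11 fails:
print([p for p in (2, 3, 5, 7, 11, 13) if is_mersenne_prime(p)])
# -> [2, 3, 5, 7, 13]   (2**11 - 1 = 2047 = 23 * 89 is composite)
```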
Computers have been on the hunt ever since, and currently the 47th Mersenne prime is the largest known to humanity. Discovered in 2008, it is 2^43,112,609 - 1, which is a number with nearly 13 million digits. That's the largest known number that can't be expressed in terms of any smaller numbers - although if you want to help find an even bigger Mersenne prime, you (and your computer) are always welcome to join the search.
Skewes' Number
Let's stay with the prime numbers for a second. As I said before, the primes are fundamentally irregular, which means there's no way to predict what the next prime will be. Mathematicians have had to go to some pretty fantastic lengths to come up with any way to predict future primes in even the vaguest of senses. The most successful of these attempts is probably the prime-counting function, which the legendary mathematician Carl Friedrich Gauss came up with in the late 1700s.
I'll spare you the more complex math - we've got plenty still to come anyway - but the gist of the function is this: for any given integer x, it's possible to estimate how many prime numbers there are that are smaller than x. For instance, if x = 1,000, the function predicts that there should be 178 prime numbers; if x = 10,000, there are 1,246 primes smaller than it; and if x = 1,000,000, then there are 78,628 smaller numbers that are prime.
Here's the thing though - prime numbers really are irregular, and so this is just a close approximation of the actual number of primes. In reality, we know that there are 168 primes smaller than 1,000, 1,229 primes smaller than 10,000, and 78,498 primes smaller than 1,000,000. It's an excellent estimate, to be sure, but it's always just an estimate...and, more specifically, an overestimate.
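The article never names the estimate, but the quoted figures (178; 1,246; 78,628) match the logarithmic integral li(x), the refinement of Gauss's idea at the heart of the prime number theorem - so that's presumably what's meant. You can reproduce both columns with sympy:

```python
from sympy import li, primepi

for x in (10**3, 10**4, 10**6):
    # li(x): the estimate; primepi(x): the true count of primes below x
    print(x, round(float(li(x))), primepi(x))
# 1000       178      168
# 10000      1246     1229
# 1000000    78628    78498
```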
In all known cases up to about 10^22, the prime-counting function slightly overestimates the actual number of primes smaller than x. Mathematicians once thought that this would be the case all the way up to infinity - it certainly holds true for some unimaginably huge quantities - but in 1914 John Edensor Littlewood proved that, at some unknown, incomprehensibly vast figure, the prime-counting function would start providing an underestimate of the number of primes, and then the function would switch back between over- and underestimates an infinite number of times.
The hunt was on for the crossover point, and that's where Stanley Skewes makes his entrance. In 1933, he proved that the upper bound for when the prime-counting function first becomes an underestimate is 10^10^10^34. It's hard to comprehend in even the most abstract sense what a number like that actually is, and at the time it was easily the largest number ever used in a serious mathematical proof. Since then, mathematicians have been able to reduce the upper limit to the relatively tiny figure of about 10^316, but the original figure remains known as Skewes' number.
So just how big is 10^10^10^34, a number that dwarfs even the mighty googolplex? In The Penguin Dictionary of Curious and Interesting Numbers, David Wells relates one way in which mathematician G.H. Hardy managed to conceptualize the size of Skewes' Number:
Hardy thought it 'the largest number which has ever served any definite purpose in mathematics', and suggested that if a game of chess was played with all the particles in the universe as pieces, one move being the interchange of a pair of particles, and the game terminating when the same position recurred for the third time, the number of possible games would be about Skewes' number.
One last thing before we move on: the Skewes' number we've been talking about is the smaller of the two. There's another Skewes' number, which the mathematician demonstrated in 1955. The first number relies on something called the Riemann hypothesis being true - a particularly complex bit of mathematics that remains unproven but is massively helpful when it comes to prime numbers. Still, if the Riemann hypothesis is false, Skewes found that the crossover point jumps all the way up to 10^10^10^963.
A Matter of Magnitude
Before we get to the number that makes even Skewes' number look tiny, we need to talk a little bit about scale, because otherwise there's really no way to appreciate where we're about to go. Let's first look at the number 3 - it's a tiny number, so small that humans can actually have an intuitive understanding of what it means. There are very few numbers that fit that description, as anything beyond about six stops being a distinct number and starts being "several," "many," and so on.
Now, let's look at 3^3, which is 27. While we can't really intuitively understand what 27 is in the same way that we can for 3, it's perfectly easy to visualize what 27 of something is. So far, so good. But what if we go on to 3^3^3? That's equal to 3^27, or 7,625,597,484,987. We're well past the point of being able to visualize that amount as anything other than a generically large number - we lose the ability to comprehend the individual parts somewhere around a million. (Admittedly, it would take an insanely long amount of time to actually count a million of anything, but the point is that we're still able to perceive it.)
Even so, while we can't visualize what 3^3^3 is, we're at least able to understand in broad terms what 7.6 trillion is, perhaps by comparing it to something like the US's GDP. We've moved from intuition to visualization to mere understanding, but at least we still have some grasp on what the number is. That's about to change, as we move another rung up the ladder.
For this, we'll need to switch to a notation invented by Donald E. Knuth, known as up-arrow notation. In this notation, 3^3^3 can be rewritten as 3^^3. When we then move to 3^^^3, the value that we're talking about is equal to 3^^(3^^3). This is equal to 3^3^3^...^3^3^3, where there are a total of 7,625,597,484,987 terms. We've now well and truly blasted past all the other numbers we've discussed. After all, even the biggest of those had just three or four terms in the exponential series. For instance, even the super-Skewes' number was "just" 10^10^10^963 - even adjusting for the fact that those are all much larger numbers than 3, it's still absolutely nothing compared to an exponent tower with 7.6 trillion terms.
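The recursion behind the arrows is compact enough to write down directly. A sketch in Python - though only the very smallest cases will ever finish evaluating:

```python
def arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow: a (n arrows) b. One arrow is plain
    exponentiation; n arrows unfold into a chain of (n-1)-arrow
    operations, working from the right."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 1, 3))  # 3^3  = 27
print(arrow(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
# arrow(3, 3, 3) is 3^^^3, a tower of 7.6 trillion 3's: don't run it.
```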
Obviously, there's no way to even begin to comprehend a number so huge...and yet, the process by which it's created can still be understood. We might not be able to grasp the actual number that is produced by an exponent tower with 7.6 trillion 3's in it, but we can basically visualize an exponent tower with that many terms in it, and indeed a decent supercomputer would be able to store the tower, even if it couldn't begin to calculate its actual value.
This is getting increasingly abstract, but it's only going to get worse. You might think that 3^^^^3 is an exponent tower of 3's that is 3^^^3 long (indeed, in a previous version of this post, I made precisely that mistake), but that is just 3^^^4. In other words, imagine you had the ability to calculate the precise value of an exponential tower of 3's that was 7,625,597,484,987 terms long, and then you took that value and created a new tower with that many 3's in it...that gets you 3^^^4.
Repeating that process with each successive number until you've done it 3^^^3 times will, at last, get you to 3^^^^3. This is a number that is just incomprehensibly vast, but at least the steps involved can still sort of be grasped, if we take things very slowly. We can no longer understand the number or visualize the procedure that would create it, but at least we can understand the basic procedure, if only in the vaguest possible terms.
Now prepare for your mind to be really blown.
Graham's Number
Here's how you get to Graham's number, which holds a place in the Guinness Book of World Records as the largest number ever used in a mathematical proof. It's utterly impossible to imagine how vast Graham's number is, and it honestly isn't much easier to explain exactly what it is. Basically, Graham's number comes into play when dealing with hypercubes, theoretical geometric shapes with more than three dimensions. Mathematician Ronald Graham wanted to figure out the smallest number of dimensions needed for certain properties of a hypercube to remain stable. (Sorry to be so vague in explaining this, but I'm pretty sure we'd all need at least two graduate degrees in mathematics before we could get any more specific.)
In any event, Graham's number is the upper limit for this minimum number of dimensions. And just how big is this particular upper bound? Well, let's go back to 3^^^^3, a number so large that we can only understand the procedure behind it in the vaguest of senses. Now, instead of simply jumping up one more level to 3^^^^^3, we're going to consider the number 3^^....^^3, in which there are 3^^^^3 arrows between those two threes. At this point, we're far beyond even the tiniest possible comprehension of what a number like this is, or even how you would go about calculating it.
Now repeat that process 62 more times.
That, ladies and gentlemen, is Graham's number, a number some 64 layers of iteration past the point of human comprehension. It is so much greater than any number you could possibly imagine - so much greater than anything you could ever hope to visualize - that it simply defies even the most abstract of descriptions.
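Written in the up-arrow notation from earlier, the whole 64-step construction collapses into a two-line recursion. This is purely symbolic - it reuses the arrow() sketch from above, and no call to g could ever actually return - but it captures the definition exactly:

```python
def g(n: int) -> int:
    """g(1) = 3^^^^3; g(n) puts g(n-1) arrows between two 3's.
    Graham's number is g(64). (Symbolic only: this never terminates.)"""
    if n == 1:
        return arrow(3, 4, 3)      # 3^^^^3, four arrows
    return arrow(3, g(n - 1), 3)   # g(n-1) arrows between the threes
```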
But here's the weird thing. Because Graham's number is basically just a bunch of 3's multiplied together, that means that we can know some of its properties without actually calculating the whole thing. We can't represent Graham's number with any familiar notation - even if we used the entire universe to write it down - but I can tell you right now what the last twelve digits of Graham's Number are: 262,464,195,387. And that's nothing - we know at least the last 500 digits of Graham's number.
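That digit-hunting trick is easy to reproduce. Graham's number is a power tower of 3's of unimaginable height, and the last d digits of such a tower stop changing once the tower is a bit taller than d, so a modest tower yields the same final digits. Here's a sketch using Euler's theorem (the height of 20, and the claim that it's tall enough, are my assumptions; the output should match the twelve digits quoted above):

```python
from sympy import totient

def tower_of_threes_mod(height: int, m: int) -> int:
    """(3^3^...^3 with `height` threes) mod m, via repeated Euler
    reduction of the exponent mod phi(m). Valid here because every
    modulus in the chain m, phi(m), phi(phi(m)), ... starting from a
    power of 10 has only the prime factors 2 and 5, so it is always
    coprime to 3."""
    if m == 1:
        return 0
    if height == 1:
        return 3 % m
    return pow(3, tower_of_threes_mod(height - 1, int(totient(m))), m)

print(tower_of_threes_mod(20, 10**12))  # -> 262464195387
```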
Of course, it's worth remembering that this number is just an upper limit for Graham's original problem. It's possible that the actual number of dimensions that you need for the properties to hold is much, much smaller. In fact, back in the 1980s, the considered opinion of most experts in this area was that the actual answer was just six - a number so tiny that we can understand it on an intuitive level. Since then, the lower limit has been raised to 13, but there's still a very good chance that the actual solution to Graham's problem isn't anywhere near as big as Graham's number.
Towards Infinity
So, are there numbers even bigger than Graham's number? Well, of course there are - there's Graham's number + 1, for a start. As for meaningful numbers... well, there are some fiendishly complicated areas of math (particularly an area known as combinatorics) and computer science that do feature numbers even bigger than Graham's number. But we've pretty much reached the limit of what I could ever hope to sensibly explain. For those foolhardy enough to delve still further, you can check out some of the additional reading at your own peril.
And yet, there's still something even bigger out there, something so big that the term ceases to have all meaning: infinity. So join us next week for part two of our odyssey into the largest numbers imaginable, as we examine all the many flavors of infinity. Until then, I leave you with this amazing quote attributed to Douglas Reay:
I have this vision of hordes of shadowy numbers lurking out there in the dark, beyond the small sphere of light cast by the candle of reason. They are whispering to each other; plotting who knows what. Perhaps they don't like us very much for capturing their smaller brethren with our minds. Or perhaps they just live uniquely numberish lifestyles, out there beyond our ken.
Scientists Cast Doubt On Heisenberg's Uncertainty Principle
Werner Heisenberg's uncertainty principle, formulated by the theoretical physicist in 1927, is one of the cornerstones of quantum mechanics. In its most familiar form, it says that it is impossible to measure anything without disturbing it. For instance, any attempt to measure a particle's position must randomly change its speed.

Figure: Heisenberg's gamma-ray microscope for locating an electron (shown in blue). The incoming gamma ray (shown in green) is scattered by the electron up into the microscope's aperture angle θ. The scattered gamma ray is shown in red. Classical optics shows that the electron position can be resolved only up to an uncertainty Δx that depends on θ and the wavelength λ of the incoming light. (Credit: parri, Wikimedia Commons)
The principle bedeviled quantum physicists for nearly a century, until researchers at the University of Toronto recently demonstrated the ability to directly measure the disturbance and confirm that Heisenberg was too pessimistic.
"We designed an apparatus to measure a property -- the polarization -- of a single photon. We then needed to measure how much that apparatus disturbed that photon," says Lee Rozema, a Ph.D. candidate in Professor Aephraim Steinberg's quantum optics research group at U of T, and lead author of a study published this week in Physical Review Letters.
"To do this, we would need to measure the photon before the apparatus but that measurement would also disturb the photon," Rozema says.
In order to overcome this hurdle, Rozema and his colleagues employed a technique known as weak measurement wherein the action of a measuring device is weak enough to have an imperceptible impact on what is being measured. Before each photon was sent to the measurement apparatus, the researchers measured it weakly and then measured it again afterwards, comparing the results. They found that the disturbance induced by the measurement is less than Heisenberg's precision-disturbance relation would require.
"Each shot only gave us a tiny bit of information about the disturbance, but by repeating the experiment many times we were able to get a very good idea about how much the photon was disturbed," says Rozema.
The findings build on recent challenges to Heisenberg's principle by scientists the world over. Nagoya University physicist Masanao Ozawa suggested in 2003 that Heisenberg's uncertainty principle does not apply to measurement, but could only suggest an indirect way to confirm his predictions. A validation of the sort he proposed was carried out last year by Yuji Hasegawa's group at the Vienna University of Technology. In 2010, Griffith University scientists Austin Lund and Howard Wiseman showed that weak measurements could be used to characterize the process of measuring a quantum system. However, there were still hurdles to clear as their idea effectively required a small quantum computer, which is difficult to build.
"In the past, we have worked experimentally both on implementing weak measurements, and using a technique called 'cluster state quantum computing' to simplify building quantum computers. The combination of these two ideas led to the realization that there was a way to implement Lund and Wiseman's ideas in the lab," says Rozema.
It is often assumed that Heisenberg's uncertainty principle applies to both the intrinsic uncertainty that a quantum system must possess, as well as to measurements. These results show that this is not the case and demonstrate the degree of precision that can be achieved with weak-measurement techniques.
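For readers who want that distinction made precise, here is the standard way it is written in the literature (background I'm supplying, not a formula quoted from the paper). Heisenberg's naive measurement-disturbance relation bounds the product of the position-measurement error ε(q) and the momentum disturbance η(p); Ozawa's 2003 inequality, the one consistent with this experiment, adds two terms involving the measured state's intrinsic spreads σ(q) and σ(p):

```latex
% Heisenberg's measurement-disturbance relation (the one violated here):
\varepsilon(q)\,\eta(p) \;\ge\; \frac{\hbar}{2}

% Ozawa's 2003 inequality (satisfied by the experiment):
\varepsilon(q)\,\eta(p) + \varepsilon(q)\,\sigma(p) + \sigma(q)\,\eta(p)
  \;\ge\; \frac{\hbar}{2}
```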
"The results force us to adjust our view of exactly what limits quantum mechanics places on measurement," says Rozema. "These limits are important to fundamental quantum mechanics and also central in developing 'quantum cryptography' technology, which relies on the uncertainty principle to guarantee that any eavesdropper would be detected due to the disturbance caused by her measurements."
"The quantum world is still full of uncertainty, but at least our attempts to look at it don't have to add as much uncertainty as we used to think!"
The findings are reported in the paper "Violation of Heisenberg's Measurement-Disturbance Relationship by Weak Measurements." The research is supported by funding from the Natural Sciences and Engineering Research Council of Canada and the Canadian Institute for Advanced Research.
"We designed an apparatus to measure a property -- the polarization -- of a single photon. We then needed to measure how much that apparatus disturbed that photon," says Lee Rozema, a Ph.D. candidate in Professor Aephraim Steinberg's quantum optics research group at U of T, and lead author of a study published this week in Physical Review Letters.
"To do this, we would need to measure the photon before the apparatus but that measurement would also disturb the photon," Rozema says.
In order to overcome this hurdle, Rozema and his colleagues employed a technique known as weak measurement wherein the action of a measuring device is weak enough to have an imperceptible impact on what is being measured. Before each photon was sent to the measurement apparatus, the researchers measured it weakly and then measured it again afterwards, comparing the results. They found that the disturbance induced by the measurement is less than Heisenberg's precision-disturbance relation would require.
"Each shot only gave us a tiny bit of information about the disturbance, but by repeating the experiment many times we were able to get a very good idea about how much the photon was disturbed," says Rozema.
The findings build on recent challenges to Heisenberg's principle by scientists the world over. Nagoya University physicist Masanao Ozawa suggested in 2003 that Heisenberg's uncertainty principle does not apply to measurement, but could only suggest an indirect way to confirm his predictions. A validation of the sort he proposed was carried out last year by Yuji Hasegawa's group at the Vienna University of Technology. In 2010, Griffith University scientists Austin Lund and Howard Wiseman showed that weak measurements could be used to characterize the process of measuring a quantum system. However, there were still hurdles to clear as their idea effectively required a small quantum computer, which is difficult to build.
"In the past, we have worked experimentally both on implementing weak measurements, and using a technique called 'cluster state quantum computing' to simplify building quantum computers. The combination of these two ideas led to the realization that there was a way to implement Lund and Wiseman's ideas in the lab," says Rozema.
It is often assumed that Heisenberg's uncertainty principle applies to both the intrinsic uncertainty that a quantum system must possess, as well as to measurements. These results show that this is not the case and demonstrate the degree of precision that can be achieved with weak-measurement techniques.
"The results force us to adjust our view of exactly what limits quantum mechanics places on measurement," says Rozema. "These limits are important to fundamental quantum mechanics and also central in developing 'quantum cryptography' technology, which relies on the uncertainty principle to guarantee that any eavesdropper would be detected due to the disturbance caused by her measurements."
"The quantum world is still full of uncertainty, but at least our attempts to look at it don't have to add as much uncertainty as we used to think!"
The findings are reported in the paper "Violation of Heisenberg's Measurement-Disturbance Relationship by Weak Measurements." The research is supported by funding from Natural Sciences and Engineering Research Council of Canada and the Canadian Institute for Advanced Research
Sunday, 2 September 2012
Which is better: Pattern Lock or Numeric Password Lock?
Android users know very well what Pattern Lock is: a lock you open by drawing a pattern connecting some of the nine dots in a grid. A question that comes up often is: which one is better and more secure, a pattern lock or a numeric password?
Yesterday I was wondering the same thing, and with the help of simple permutations and combinations I was able to calculate the maximum number of attempts needed to exhaust every possible pattern lock and numeric password lock. The more attempts needed, the more secure the lock is. The results were:
Pattern Lock - 986,400 attempts needed
Numeric Password Lock - 2,605,680 attempts needed
Since the numeric password lock needs more attempts to exhaust, it is more secure than the pattern lock.
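The post doesn't show its working, but the figures are reproducible under a couple of assumptions: a pattern uses between 2 and 9 distinct dots (order matters, no dot reused, ignoring Android's adjacency rules), and a numeric password uses between 4 and 8 digits with no digit repeated. Under those assumptions, a quick Python check gives exactly the numbers above:

```python
from math import perm  # perm(n, k) = n! / (n - k)!  (Python 3.8+)

# Patterns: ordered selections of 2..9 distinct dots from a 3x3 grid.
pattern_attempts = sum(perm(9, k) for k in range(2, 10))

# Numeric passwords: ordered selections of 4..8 distinct digits from 0-9.
password_attempts = sum(perm(10, k) for k in range(4, 9))

print(pattern_attempts)   # -> 986400
print(password_attempts)  # -> 2605680
```

Note that real Android patterns also forbid skipping over an unvisited intermediate dot, and real PINs allow repeated digits, so both counts are approximations of the actual search spaces.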
Could we ever go back in time?
Time machines are commonly seen in science fiction films and books, but no one knows how to build one.
However, it is possible to slow down time by travelling very fast relative to someone who is stationary. This was predicted by Einstein's theory of relativity at the beginning of the 20th century and has since been proved many times. One of the best examples of this was demonstrated by putting a very accurate clock on board a passenger jet.
Another identical clock was kept on the ground and synchronised with the clock on the plane. After a number of long distance flights the two clocks were compared, and the one that had been on the plane was running behind the clock that had stayed on the ground. The difference between the two clocks was exactly the difference predicted by the theory of relativity.
It is important to understand that this slowing down of time depends on the speed you are travelling at relative to someone else. None of the passengers on the plane would have noticed anything strange - as far as they are concerned, time is passing as usual. It is only when they get off the plane and compare their watches with someone who has been stationary relative to them that they notice a difference. Passenger jets fly at about 600 miles per hour, which seems pretty fast. However, after the experiment described above, the difference between the two clocks amounted to just a few hundred billionths of a second at most, which is why very accurate clocks were needed.
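To see where numbers like that come from, here's a back-of-the-envelope special-relativity calculation. The flight duration is an assumed figure, and the real experiment also involved gravitational effects that this sketch ignores:

```python
from math import sqrt

c = 299_792_458.0   # speed of light, m/s
v = 600 * 0.44704   # 600 mph converted to m/s (~268 m/s)
t = 8 * 3600.0      # one assumed 8-hour long-distance flight, in seconds

gamma = 1.0 / sqrt(1.0 - (v / c) ** 2)   # Lorentz factor, barely above 1
lag = t * (1.0 - 1.0 / gamma)            # how far the moving clock falls behind

print(f"{lag:.1e} seconds")  # ~1.2e-08 s: about 12 billionths of a second
```

A handful of such flights adds up to the tens or hundreds of nanoseconds that atomic clocks can readily resolve.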
If you could travel at speeds close to the speed of light (about 186,000 miles a second), time would slow down significantly from the perspective of someone who is not moving. Unfortunately, we do not know how to build rockets that fast!
Wednesday, 22 August 2012
Milky Way Now Has a Twin (or Two): Astronomers Find First Group of Galaxies Just Like Ours
Research presented Aug. 23, 2012 at the International Astronomical Union General Assembly in Beijing has found the first group of galaxies that is just like ours, a rare sight in the local Universe.
The Milky Way is a fairly typical galaxy on its own, but when paired with its close neighbours -- the Magellanic Clouds -- it is very rare, and could have been one of a kind, until a survey of our local Universe found another two examples just like us. Astronomer Dr Aaron Robotham, jointly from the University of Western Australia node of the International Centre for Radio Astronomy Research (ICRAR) and the University of St Andrews in Scotland, searched for groups of galaxies similar to ours in the most detailed map of the local Universe yet, the Galaxy and Mass Assembly survey (GAMA).
The Galaxy and Mass Assembly (GAMA) survey is an international collaboration led from ICRAR and the Australian Astronomical Observatory to map our local Universe in closer detail.
ICRAR is a joint venture between Curtin University and The University of Western Australia providing research excellence in the field of radio astronomy.
Tuesday, 21 August 2012
Sun's Plasma Loops Recreated in the Lab to Help Understand Solar Physics
In orbit around Earth is a wide range of satellites that we rely on for everything from television and radio feeds to GPS navigation. Although these spacecraft soar high above storms on Earth, they are still vulnerable to weather -- only it's weather from the sun. Large solar flares -- or plasma that erupts from the sun's surface -- can cause widespread damage, both in space and on Earth, which is why researchers at the California Institute of Technology (Caltech) are working to learn more about the possible precursors to solar flares called plasma loops. Now, they have recreated these loops in the lab.
Funding for the research outlined in the Physical Review Letters paper, "Magnetically Driven Flows in Arched Plasma Structures," came from the National Science Foundation, the U.S. Department of Energy, and the Air Force Office of Scientific Research.