Issue 000 Solutions

Many thanks to all readers who submitted answers to this issue’s puzzles!

The Hardest Math-24 Problem

Nadia Poulteney, Seth Kahn, and Daniel Gorrie all submitted the correct combination of operations to get 24:

8 / (3 – 8 / 3) = 24
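A quick exact-arithmetic check (floating point cannot represent 8/3 exactly, so a sketch using Python's fractions module avoids rounding noise):

```python
from fractions import Fraction

# Exact check: 3 - 8/3 = 1/3, and 8 divided by 1/3 is 24.
eight, three = Fraction(8), Fraction(3)
print(eight / (three - eight / three))  # -> 24
```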


Earth Traveler’s Puzzle

Seth Kahn provided a very well-worded solution:

Two answers (the second of which spans an infinite circular region) come to mind. The first and easiest is the North Pole. You will travel 100 miles south, travel 100 miles east (remaining equidistant from the North Pole), and then travel 100 miles north again, ending up back at the North Pole. The second, and more difficult one, is near the South Pole. Specifically, ANY point that is 100 miles north of the circle of latitude near the South Pole that has a circumference of 100 miles. You will travel 100 miles south, and then be on that circle. That circle has a circumference of 100 miles, so once you travel 100 miles east, you will have made a complete loop and end up 100 miles south of your starting position. Finally, traveling 100 miles north will put you back where you began.
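As a rough sanity check on the geometry, treating the Earth as a sphere with a mean radius of about 3,959 miles (an assumed figure, not part of the puzzle), a minimal sketch locates the starting circle near the South Pole:

```python
import math

R = 3959.0  # assumed mean Earth radius in miles (not given in the original)

# A circle of latitude at arc distance d from the pole has circumference
# 2*pi*R*sin(d/R); set that equal to 100 miles and solve for d.
d = R * math.asin(100 / (2 * math.pi * R))
print(d)        # ~15.92 miles from the South Pole: the 100-mile circle
print(d + 100)  # ~115.92 miles from the pole: where the traveler must start
```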


Detecting Randomness

Solution via http://www.analyticbridge.com/

It is very clear that chart #3 exhibits a strong clustering pattern, unless you define your problem as points randomly distributed in an unknown domain whose boundary has to be estimated. So, the big question is: between charts #1 and #2, which one represents randomness? … Note that all three charts contain the same number of points – so there’s no scaling issue involved here.

Let’s assume that we are dealing with a spatial distribution of points over the entire 2-dimensional space, and that observations are seen through a small square window. For instance, points (observations) could be stars as seen on a picture taken from a telescope.

The first issue is the fact that the data is censored: if you look at the distribution of nearest-neighbor distances to draw conclusions, you must take into account the fact that points near the boundary have fewer neighbors because some neighbors are outside the boundary. You can eliminate the bias by:

1. Tiling the observation window to produce a mathematical tessellation

2. Mapping the square observation window onto the surface of a torus (see the sketch after this list)

3. Applying statistical bias-correction techniques

4. Using Monte-Carlo simulations to estimate the true distribution (with confidence intervals) if the data were truly random
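Here is a minimal Python sketch of the torus idea in option 2, combined with the simulate-and-compare step from option 4; the function name and sample size are illustrative choices, not from the original:

```python
import numpy as np

def torus_nn_distances(points, size=1.0):
    # Pairwise coordinate differences with wrap-around (torus) edges,
    # so points near the boundary get a full set of neighbors.
    diff = np.abs(points[:, None, :] - points[None, :, :])
    diff = np.minimum(diff, size - diff)
    dist = np.hypot(diff[..., 0], diff[..., 1])
    np.fill_diagonal(dist, np.inf)   # exclude each point's distance to itself
    return dist.min(axis=1)          # nearest-neighbor distance per point

# A simulated "truly random" (uniform) sample, as in option 4 above
rng = np.random.default_rng(0)
print(torus_nn_distances(rng.random((500, 2))).mean())
```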

Second issue: you need to use better visualization tools to see the patterns. The fact that … a + [is used] rather than a dot symbol to represent the points helps: some points are so close to each other that if you represent points with dots, you won’t visually see the double points (in our example, double points could correspond to double-star systems – and these very small-scale point interactions are part of what makes the distribution non-random in two of our charts). But you can do much better: you could measure a number of metrics (averages, standard deviations, correlation between x and y, number of points in each sub-square, density estimates, etc.) and identify metrics proving that we are not dealing with pure randomness.

In these 3 charts, the standard deviation for either x or y – in case of pure randomness – should be 0.290 plus or minus 0.005 (for a uniform distribution on [0, 1], the standard deviation is 1/√12 ≈ 0.289). Only one of the 3 charts passes this randomness test.
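A minimal sketch of these summary metrics on a simulated uniform sample (the function name and sample size are illustrative, not from the original):

```python
import numpy as np

def basic_metrics(pts):
    sx, sy = pts.std(axis=0)                        # each ~0.289 if uniform
    corr = np.corrcoef(pts[:, 0], pts[:, 1])[0, 1]  # ~0 if uniform
    mx, my = pts.mean(axis=0)                       # each ~0.50 if uniform
    return sx, sy, corr, mx, my

rng = np.random.default_rng(1)
print(basic_metrics(rng.random((500, 2))))
```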

Third issue: even if multiple statistical tests suggest that the data is truly random, that does not mean it really is. For instance, all three charts show zero correlation between x and y, and have mean x and y close to 0.50 (a requirement to qualify as a random distribution in this case). However, only one chart exhibits randomness.

Fourth issue: we need a mathematical framework to define and check randomness. True randomness is the realization of a Poisson stochastic process, and we need to use metrics that uniquely characterize a Poisson process to check whether a point distribution is truly random or not. Such metrics could be, e.g.:

a. The inter-point distance distributions

b. The number of observations in sub-squares (these counts should be uniformly distributed over the sub-squares, and a chi-square test could provide the answer – see the sketch after this list; however, in our charts, we don’t have enough points in each sub-square to provide a valid test result)
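A hedged sketch of metric (b), assuming a 4-by-4 grid of sub-squares and SciPy's chisquare test; the grid size and point count are illustrative choices:

```python
import numpy as np
from scipy.stats import chisquare

def subsquare_counts(pts, k=4):
    # Bin each point into one of k*k sub-squares of the unit window.
    ix = np.minimum((pts[:, 0] * k).astype(int), k - 1)
    iy = np.minimum((pts[:, 1] * k).astype(int), k - 1)
    return np.bincount(ix * k + iy, minlength=k * k)

rng = np.random.default_rng(2)
counts = subsquare_counts(rng.random((800, 2)))
print(chisquare(counts))  # H0: counts uniform across the k*k sub-squares
```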

Fifth issue: some of the great metrics (distances between kth-nearest neighbors) might not have a simple mathematical formula. But we can use Monte-Carlo simulations to address this issue: simulate a random process, compute the distribution of distances (with confidence intervals) based on thousands of simulations, and compare with distances computed on your data. If the distance distribution computed on the data set matches the results from simulations, we are good: it means our data is probably random. However, we would have to make sure that the distance distribution uniquely characterizes a Poisson process, and that no non-random process could yield the same distance distribution. This exercise is known as goodness-of-fit testing: you try to see whether your data supports a specific hypothesis of randomness.
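A minimal sketch of this goodness-of-fit procedure, assuming nearest-neighbor distances as the metric and a two-sample Kolmogorov–Smirnov comparison (the KS test is one reasonable choice here, not necessarily the author's):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import ks_2samp

def nn_distances(pts):
    d, _ = cKDTree(pts).query(pts, k=2)  # k=2: the first hit is the point itself
    return d[:, 1]                       # distance to the nearest other point

def mc_goodness_of_fit(observed, n_sims=1000, seed=0):
    # Pool nearest-neighbor distances from n_sims simulated uniform samples,
    # then compare the observed distances against that reference distribution.
    rng = np.random.default_rng(seed)
    sim = np.concatenate([nn_distances(rng.random(observed.shape))
                          for _ in range(n_sims)])
    return ks_2samp(nn_distances(observed), sim)  # small p-value => not random

rng = np.random.default_rng(3)
print(mc_goodness_of_fit(rng.random((500, 2)), n_sims=200))
```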

Sixth issue: if you have a million points (and in high dimensions, you need far more than a million points due to the curse of dimensionality), then you have on the order of a trillion distances to compute. No computer, not even in the cloud, will be able to make all these computations in less than a thousand years. So you need to pick 10,000 points at random, compute distances, and compare with equivalent computations based on simulated data. You need to make 1,000 simulations to get confidence intervals, but this is feasible.
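A short sketch of that subsampling step (the sizes are the ones quoted above; the data here is a simulated stand-in):

```python
import numpy as np

rng = np.random.default_rng(4)
million = rng.random((1_000_000, 2))   # stand-in for the full data set
sample = million[rng.choice(len(million), 10_000, replace=False)]
# feed `sample` (not `million`) into the distance computations above
```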

Here’s how the data (charts 1-3) was created:

  • Chart 2: r=0.5, s=0.5
  • Chart 3: r=0.4, s=0.1

Notes

  • The only chart exhibiting randomness is chart #1. Chart #2 has standard deviations for x and y that are significantly too low, too few points near the boundaries, and too many points that are very close to each other.
  • Note that chart #1 (the random distribution) exhibits a little bit of clustering, as well as some point alignments: this is, however, perfectly expected from a random distribution. If the number of points in each sub-square were identical, the distribution would not be random, but would correspond to a situation where antagonist forces make points stay as far away as possible from each other.
  • How would you test randomness if you had only two points (impossible to test), three points, or just 10 points?
  • Finally, once a pattern is detected (e.g. abnormally close proximity between neighboring points), it should be interpreted and/or leveraged; that is, it should lead to (say) ROI-positive trading rules if the framework is stock trading, or the conclusion that double stars do exist (based on chart #2) if the framework is astronomy.

Bank Branches

Solution via http://alexbraunstein.com/:

This question establishes familiarity with a wide range of basic stat concepts: mean, variance, waiting times, the central limit theorem, and the ability to model and then analyze a real-world situation. Both options have the same mean wait time. The latter option has smaller variance, because you are averaging the wait times of 100 individuals ahead of you rather than 10. One can fairly argue about utility functions and the merits of risk-seeking behavior over risk-averse behavior, but I’d go for the same mean with smaller variance (think about how maddening it is when another line at the grocery store is faster than your own).
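A minimal simulation sketch of the variance argument, assuming the two options amount to averaging 10 vs. 100 i.i.d. service times (exponential service times are an illustrative assumption; the puzzle's exact setup isn't restated above):

```python
import numpy as np

rng = np.random.default_rng(5)
service = rng.exponential(scale=1.0, size=(100_000, 100))  # minutes per person

avg10 = service[:, :10].mean(axis=1)   # average wait over 10 people ahead
avg100 = service.mean(axis=1)          # average wait over 100 people ahead

print(avg10.mean(), avg100.mean())     # means agree (~1.0)
print(avg10.std(), avg100.std())       # ~0.316 vs ~0.100: smaller variance
```

The spread of the average shrinks like 1/sqrt(n), which is the central-limit-theorem intuition the solution appeals to.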


Thumbnail Image Provided by Maraya Rodostianos via Flickr

 

Sunjay Bhatia
Lead Editor & Web Developer.
Sunjay Bhatia is a Junior at Tufts majoring in Computer Science and minoring in Philosophy.
