Quantitative Analyst was asked...January 22, 2014

↳

It is the OLS estimator (under the Gauss-Markov assumptions plus normality of the errors), by Fisher's theorem on maximum likelihood estimators.

↳

The question didn't say linear. If it's linear, then OLS; if not, the CEF (conditional expectation function).


Quantitative Trader was asked...December 2, 2010

↳

I loved this question and want to renew this debate. What do you guys think about my two approaches to solving it?

1) Suppose we can only play this game once AND our goal is to maximize profit (as the question states). I agree with the above that the expected value of the coin is 50. Given that, we bid 51 to win the auction and expect to pocket 1.5 * 50 - 51 = 24. The problem is we only win when the coin is worth 0-50, which conditions its expected value down to 25, and so we lose. The same deduction works all the way down to a zero bid.

2) Nothing beats a little Monte Carlo experiment. I created a 100 x 1,000,000 matrix, where 100 is the number of possible bid levels and 1M is the number of uniformly distributed random prices on 0-100, and calculated the expected gain at each bid level from 0 to 100. I wish I could post a MATLAB graph here: it looks like the downward-facing half of a parabola with a maximum of 0 and a minimum of -25. Results: the best average gain of 0 is achieved at a bid of 0; the worst average gain of -25 is at a bid of 100. Comments appreciated!
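The Monte Carlo experiment described above is easy to reproduce. A minimal sketch (my own code, replacing the 100 x 1M matrix with a per-bid loop; the 1.5x resale multiplier and the uniform 0-100 value are taken from the thread):

```python
import random

def expected_gain(bid, trials=200_000, seed=0):
    """Average profit from bidding `bid` on a coin whose true value is
    uniform on [0, 100]: we win the auction only when value <= bid,
    and then resell the coin at 1.5x its value."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        value = rng.uniform(0, 100)
        if value <= bid:              # we win the auction and pay `bid`
            total += 1.5 * value - bid
    return total / trials

# Analytically E[gain | bid b] = (b/100) * (1.5 * b/2 - b) = -b^2 / 400:
# the downward half-parabola, 0 at bid 0 and -25 at bid 100.
```

The simulated curve matches the description above: the best average gain (0) is at a bid of 0, and the worst (about -25) is at a bid of 100.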

↳

I had the longest argument with a friend about this. You cannot get a positive expected value no matter what you bid. If you bid $50, you can discount the cases where the coin is worth $51-100, since those contribute nothing (you don't win the auction). If you bid $50 and the coin's worth $50, you sell for $75 and make $25; but if the coin's worth $0, you lose $50. Keep comparing the extremities and you will see that in almost all cases you lose more than you make. That's the best I can explain it; I had to use a spreadsheet to prove it to my friend. To get an EV of 0, you'd need to change the multiplier to 2, which makes sense: if X is your bid, your expected profit given that you win is (X/2) * 1.5 - X < 0, whereas with a multiplier of 2 it would be (X/2) * 2 - X = 0.

↳

The price is NOT necessarily uniformly distributed between 0 and 100, so 0 might not be the right answer.

Quantitative Developer was asked...July 31, 2009

↳

Start both timers together. When the 4-minute timer is done, flip it; the 7-minute timer will have 3 minutes left. When the 7-minute timer is done, the 4-minute timer will have 1 minute left. Now you can count 9 minutes by simply letting the 4-minute timer expire (1 min), flipping it and letting it expire (4 min), then flipping it again and letting it expire (4 min). 1 + 4 + 4 = 9.

↳

The key is understanding that you have to use the two hourglasses together. Since this problem could be asked in many ways, with different values for the hourglasses and the total time, it's more important to understand how to use the tools than to memorize a specific example. The question is used to separate those who can apply their knowledge to solve problems from those who memorize answers "from the book".

Start both timers. After four minutes, the four-minute timer will have expired and the seven-minute timer will have three minutes remaining; flip the four-minute timer over. After seven minutes, the seven-minute timer will have expired and the four-minute timer will still have one minute left; flip the seven-minute timer over. After eight minutes, the four-minute timer will have expired for the second time, and the seven-minute timer will have accumulated one minute since its last flip. Flip the seven-minute timer over, and when it expires, nine minutes will have elapsed.

For extra measure, you can always throw in something like, "assuming the timers can be flipped over nearly instantly..."

↳

1st timer   2nd timer    elapsed
4           7            start both timers
3           6            1 min
2           5            2 min
1           4            3 min
0 (flip)    3            4 min
4           3            4 min (assuming the flip takes no time, ideally)
3           2            5 min
2           1            6 min
1           0 (flip)     7 min
1           7            7 min (again an ideal flip)
0           6            8 min (flip the 2nd timer to count the 1 min it has run)
0           0            9 min
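The flip schedule above can be sanity-checked in code. A small sketch of my own, assuming instantaneous flips:

```python
class Glass:
    """An hourglass with instantaneous flips; `top` is the minutes of
    sand left in the upper bulb, `last` the time it was last updated."""
    def __init__(self, minutes):
        self.minutes = minutes
        self.top = float(minutes)
        self.last = 0.0

    def advance(self, t):
        self.top = max(0.0, self.top - (t - self.last))
        self.last = t

    def flip(self, t):
        self.advance(t)
        self.top = self.minutes - self.top

def nine_minutes():
    four, seven = Glass(4), Glass(7)
    four.flip(4); seven.advance(4)     # t=4: the 4-min glass empties; flip it
    seven.flip(7); four.advance(7)     # t=7: the 7-min glass empties; flip it
    four.advance(8); seven.flip(8)     # t=8: 4-min empties; flip the 7-min
    sand_up_top = seven.top            # exactly the 1 minute it had run
    seven.advance(9)                   # t=9: that sand runs out -- done
    return sand_up_top, seven.top

# nine_minutes() returns (1.0, 0.0): one minute of sand sits on top after
# the last flip, and it is fully drained at the nine-minute mark.
```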

Quantitative Analyst was asked...June 15, 2015

↳

Get rid of the measurements that are equal to the ruler's length, then take the average of the remaining measurements, which lie in the range (0, ruler_length). The ruler's length is 2 times this average value (this assumes the uncensored values are roughly uniform).

↳

Assuming the measurements are on a continuous scale, you would have a lot of mass on the point exactly corresponding to the ruler's length, so you could use something akin to a mode, I'd imagine.

↳

The mode should work, right? The length of the ruler is likely to be the only specific value that shows up more than once in the data.
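Both suggestions are easy to try on synthetic data. A sketch under my own assumptions: true lengths are uniform on (0, 100), the ruler is 30 units long, and anything longer than the ruler gets recorded as exactly the ruler's length:

```python
import random
from collections import Counter

def mode_estimate(measurements):
    """On a continuous scale, the only value that repeats exactly is
    the censoring point, i.e. the ruler's length."""
    value, count = Counter(measurements).most_common(1)[0]
    return value

rng = random.Random(1)
ruler = 30.0
data = [min(rng.uniform(0, 100), ruler) for _ in range(1000)]

est_mode = mode_estimate(data)             # exactly 30.0
below = [x for x in data if x < ruler]     # the uncensored measurements
est_mean = 2 * sum(below) / len(below)     # roughly 30
```

The 2x-mean estimator additionally needs the uniformity assumption; the mode only needs the exact-repeat observation from the answers above.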

Quantitative Trader was asked...October 17, 2010

↳

2/3

↳

Because you have a 1/3 chance of picking the double-headed coin, which is sure to show heads, and a 1/3 chance of picking the single-headed coin, which shows heads with probability 1/2. So the probability of choosing the double-headed coin and getting heads is 1/3, while that of choosing the single-headed coin and getting heads is 1/6. Given that you got heads after tossing, the chance that you chose the double-headed coin is (1/3) / (1/3 + 1/6) = 2/3.
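The Bayes calculation above can be checked by simulation. My own sketch, assuming the setup implied by the 1/3 figures: three equally likely coins, one double-headed, one normal, one double-tailed:

```python
import random

def p_double_headed_given_heads(trials=300_000, seed=0):
    """Pick one of three coins (double-headed, normal, double-tailed)
    uniformly at random, look at a random side, and condition on that
    side being heads."""
    rng = random.Random(seed)
    heads = double = 0
    for _ in range(trials):
        coin = rng.choice(["HH", "HT", "TT"])
        if rng.choice(coin) == "H":    # each of the two sides is equally likely
            heads += 1
            double += coin == "HH"
    return double / heads

# The estimate settles near 2/3, matching the (1/3)/(1/3 + 1/6) calculation.
```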

↳

2 heads on double headed coin, 1 head on the other, P(head is coming from double headed) = 2/3 Less

Quantitative Analyst was asked...October 14, 2014

↳

For question 1, recall that a positive-definite symmetric matrix is one with all eigenvalues greater than 0, so one method is to calculate the eigenvalues of this matrix. Call the matrix M, and let the nxn matrix A be the matrix of all a's. Then M = A - (a-1)I, where I is the identity matrix. This is a matrix polynomial in A (call the polynomial p(x) = x - (a-1)). By the spectral mapping theorem, the eigenvalues of M are p(t), where t ranges over the eigenvalues of A. Thus it suffices to find the eigenvalues of A.

Since A clearly has rank 1 (for a != 0), the eigenvalue 0 appears with multiplicity n-1. One can see directly that the other eigenvalue is na, or derive it from the characteristic polynomial: the trace is na, and all other coefficients vanish because 0 is an eigenvalue of multiplicity n-1 (the characteristic polynomial is (x-t)x^(n-1), where t is the other eigenvalue). Solving for the nonzero root gives the eigenvalue na.

It follows that the eigenvalues of M are 1-a (with multiplicity n-1) and na-a+1. For both to be positive, we need -1/(n-1) < a < 1.
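The two claimed eigenvalues can be checked without a full eigensolver, since (1, ..., 1) and e1 - e2 are eigenvectors of M. A small pure-Python sketch (names mine):

```python
def mat_vec(n, a, v):
    """y = M v, where M is the n x n matrix with 1 on the diagonal and
    a everywhere else: y_i = v_i + a * (sum(v) - v_i)."""
    s = sum(v)
    return [v[i] + a * (s - v[i]) for i in range(n)]

n, a = 5, 0.5                       # -1/(n-1) < a < 1, so M is positive definite
ones = [1.0] * n
assert mat_vec(n, a, ones) == [n * a - a + 1] * n          # eigenvalue na-a+1
diff = [1.0, -1.0] + [0.0] * (n - 2)
assert mat_vec(n, a, diff) == [(1 - a) * x for x in diff]  # eigenvalue 1-a
```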

↳

For Q2: my first solution was via induction, as in the solution above. But since the answer is n_C_2 (read: n choose 2), it seemed like the problem should have a direct (i.e., induction-free) solution, and here is one. Think of the problem as being about the edge set of the complete graph K_n; note it has n_C_2 edges. You partition the edge set into bipartite edge sets: given the vertex set V, break it into V_1 and V_2 (of sizes n_1, n_2). All the edges between V_1 and V_2 form the first bipartite graph (with n_1 x n_2 edges); then repeat within V_1, and separately within V_2. Thus the sum, across the partitions, of the corresponding n_1 x n_2 values equals the total number of edges in the complete graph, which is n_C_2.
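The bipartite-partition argument above can be verified directly: however you choose the splits, the n_1 x n_2 counts always total C(n, 2). A quick sketch of my own:

```python
from math import comb
import random

def bipartite_edge_sum(n, rng):
    """Split n vertices into two nonempty parts of sizes n1 and n2,
    count the n1 * n2 bipartite edges, and recurse into each part;
    the grand total is always C(n, 2), whatever splits are chosen."""
    if n < 2:
        return 0
    n1 = rng.randint(1, n - 1)
    return (n1 * (n - n1)
            + bipartite_edge_sum(n1, rng)
            + bipartite_edge_sum(n - n1, rng))

rng = random.Random(0)
for n in range(2, 16):
    assert bipartite_edge_sum(n, rng) == comb(n, 2)
```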

↳

Question 2) seems incomplete. Do you mind showing the complete question? Thanks!

Quantitative Researcher Summer Intern was asked...April 18, 2011

↳

There is symmetry between red and black: each time you pull a card, it is equally likely to be red or black (assuming you haven't looked at the previously pulled cards). Thus no matter when you guess, your odds are 50%, and the expected return should be 50 cents.

↳

The problem should be: cards are drawn at random without replacement, and on every draw you have one chance to guess. So the strategy is: on the first draw, guess red at random. If correct, you get one dollar, and you know there are now fewer reds than blacks left, so you guess black on the next draw. If your first guess was wrong, you guess red on the next round. It's all about conditioning on the information you know from the previous drawings.

↳

The problem statement is not very clear. What I understand is: you take one card at a time, and you can either guess or look at it. If you guess and it's red, you gain $1; whatever the result, the game is over after the guess. The answer is then $0.50 under any strategy. Suppose there are x red and y black cards left. If you guess now, your chance of winning is x/(x+y). If instead you look at this card and guess on the next one, your chance of winning is x/(x+y) * (x-1)/(x+y-1) + y/(x+y) * x/(x+y-1) = x/(x+y), which is the same. A rigorous proof would obviously be done by induction, starting from x, y = 0, 1.
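Under that reading (one guess, game over), the strategy-independence claim can be checked empirically. A sketch of my own, comparing guessing immediately against waiting for more reds than blacks to remain:

```python
import random

def play(strategy, trials=50_000, seed=0):
    """Shuffle a 26-red/26-black deck each trial. `strategy` sees the
    colors revealed so far and returns True to guess 'the next card is
    red' (payoff $1 if right, game over either way). If it never
    guesses, it is forced to guess on the last card."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        deck = ["R"] * 26 + ["B"] * 26
        rng.shuffle(deck)
        for i in range(52):
            if i == 51 or strategy(deck[:i]):
                wins += deck[i] == "R"
                break
    return wins / trials

guess_immediately = lambda seen: True
wait_for_edge = lambda seen: seen.count("B") > seen.count("R")
# Both strategies win about half the time, so both are worth about $0.50.
```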

Quantitative Trader was asked...January 2, 2010

↳

It is "Moser's circle problem": 1 + nC2 + nC4.

↳

This is a famous example in mathematics; it's often used as a warning against naive generalization. Here are the answers for the first six natural numbers:

(# points) : (# regions)
1 : 1
2 : 2
3 : 4
4 : 8
5 : 16
6 : 31

Yes, 31. You can see, e.g., Conway and Guy's "The Book of Numbers" for an account of this.
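The closed form quoted above reproduces exactly this table, including the break in the powers-of-two pattern at n = 6:

```python
from math import comb

def circle_regions(n):
    """Moser's circle problem: number of regions the chords between n
    points in general position cut the disc into: 1 + C(n,2) + C(n,4)."""
    return 1 + comb(n, 2) + comb(n, 4)

# The pattern 1, 2, 4, 8, 16 breaks at n = 6:
print([circle_regions(n) for n in range(1, 8)])   # [1, 2, 4, 8, 16, 31, 57]
```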

↳

mingda is correct

Software Engineering and Quantitative Research was asked...January 9, 2014

↳

1. The possibilities BB, BG, GB, GG have probability 1/4 each, later reduced to only BB, BG, GB with probability 1/3 each. So the probability of BB is 1/3.

2. Let w be the probability that a boy is named William. The probability of having at least one William in the family is 2w - w^2 for BB, w for BG, w for GB, and 0 for GG. So the probability of BB given at least one William is (2w - w^2) / (2w - w^2 + w + w) = (2 - w)/(4 - w) ~ 1/2 for small w.
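The ~1/2 answer can be checked by simulation. My own sketch, with w = P(a boy is named William) as above:

```python
import random

def p_both_boys_given_william(w, trials=400_000, seed=0):
    """Two-children families: each child is a boy with probability 1/2
    and, if a boy, is named William with probability w. Returns the
    simulated P(both boys | at least one William); the exact value
    from the calculation above is (2 - w)/(4 - w)."""
    rng = random.Random(seed)
    both = cond = 0
    for _ in range(trials):
        kids = [("boy", rng.random() < w) if rng.random() < 0.5
                else ("girl", False) for _ in range(2)]
        if any(named for _, named in kids):
            cond += 1
            both += all(sex == "boy" for sex, _ in kids)
    return both / cond

# As w shrinks, the conditional probability approaches 1/2.
```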

↳

The answer by the anonymous poster on Sep 28, 2014 gets closest. However, I think the calculation P[Y] = 1 - P[C1's name != William AND C2's name != William] should give 1 - (1 - e/2)(1 - e/2) = e - e^2/4, as opposed to the poster's 1 - e^2/4, which I think overstates the probability of Y.

For example, assume e (= P[X is named William | X is a boy]) is 0.5, meaning half of all boys are named William. Then e - e^2/4 gives P(Y) = 7/16, where Y = {C1 is William or C2 is William}; 1 - e^2/4 gives P(Y) = 15/16, which is way too high, because there is more than one case in which both C1 and C2 are not Williams (e.g. both are girls, or both are boys not named William).

So the final answer becomes (3e/2 - e^2/2) * 0.5 / (e - e^2/4) = (3e - e^2)/(4e - e^2) = (3 - e)/(4 - e).

One reason I thought this might be incorrect is that setting e = 0 does not give P(C2 is a boy | Y) = 0, unlike the anonymous poster's answer. But I think e = 0 violates the question's assumptions: if no boy is named William while William is a boy's name, then no one in the world is named William, and yet the question came up with a person named William!

↳

I think "second child" refers to the other child (the one not on the phone). In this case the answer to the first question is 1/3, and to the second it is (1-p)/(2-p), where p is the total probability of the name William. As a sanity check: if all boys are named William, the two answers coincide.

Quantitative Trader was asked...October 17, 2010

↳

I think it is 7. Kannp, the top five are not established after 5 races, because the second-place horse in one race might be faster than the winner of another. Run five races of five and keep the top 3 from each. Then race the five winners against each other (race 6). The bottom two finishers, and the 4 horses that lost to them in their heats, are discarded; the two horses that lost to the third-place finisher are discarded; and the horse that came 3rd in the heat of the horse that came 2nd in race 6 is also discarded. Now six horses are left. Race all of them except the horse that has won twice, keep the top two, and combine them with the horse that sat out. You are done in 7 races.
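A quick way to convince yourself is to simulate the scheme. My own code below implements the standard version of this plan, where race 7 pits the 2nd and 3rd from the best heat, the 1st and 2nd from the second-best heat, and the winner of the third-best heat:

```python
import random

def top_three(speed):
    """speed: dict mapping each of 25 horses to a race time (lower is
    faster). Returns (the 3 fastest horses in order, races used)."""
    count = [0]
    def race(group):
        group = list(group)
        assert len(group) <= 5        # track rule: at most 5 per race
        count[0] += 1
        return sorted(group, key=lambda h: speed[h])
    heats = [race(range(5 * i, 5 * i + 5)) for i in range(5)]  # races 1-5
    winners = race([h[0] for h in heats])                      # race 6
    rank = {w: i for i, w in enumerate(winners)}
    a, b, c = sorted(heats, key=lambda h: rank[h[0]])[:3]
    # a[0] is the overall fastest; race 7 decides 2nd and 3rd place
    final = race([a[1], a[2], b[0], b[1], c[0]])               # race 7
    return [a[0]] + final[:2], count[0]

rng = random.Random(0)
speed = {h: rng.random() for h in range(25)}
found, n_races = top_three(speed)
```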

↳

He gave a very good explanation. I'll decline to explain why prm is incredibly wrong. Less

↳

7 races.