Quantitative Research Analyst Interview Questions
Top Interview Questions
Quantitative Analyst at Google was asked...
You observe a sample of measurements coming from a fixed-length ruler. If the object is shorter than the ruler, you observe the actual measurement; otherwise you observe the length of the ruler. What would be a good estimator of the ruler length?

17 Answers

My initial answer was to use the MAX of the sample. That, however, is a biased estimator. How can you account for the bias and come up with an unbiased estimator? I think this is where you need to start making assumptions about the distribution. A uniform distribution would allow you to estimate the bias.

What kind of distribution is it? What do we do with the data? What precision should we get? What happens if we "lose" the oversize data?

I came up with a solution: if we know the distribution of the actual measurement, then the expected probability of getting a wrong measurement should equal the probability that the actual measurement is greater than the length of the ruler. Not sure this is correct; I interview on Friday. Good luck to me.

If we know the distribution, I'd analyse the tail of the cumulative density function.

round(central tendency) * 2

Please ignore my previous two answers above. I misread the question and thought it was a regression problem, when it wasn't.

Get rid of measurements that are equal to the ruler length, then take the average of the remaining measurements, which lie in the range (0, ruler_length); the ruler length is 2 times this average value.

Assuming that the measurements are on a continuous scale, you would have a lot of mass on the point exactly corresponding to the ruler's length, so you could use something akin to a mode, I'd imagine.

The mode should work, right? The length of the ruler is likely to be the only specific value that shows up more than once in the data.
Should first ask whether we have some prior knowledge about the object length distribution.

L^{hat} = N/(N+1) * max(X1, X2, X3, ..., XN) is an unbiased estimator. L^{hat} = 2*sum(X)/N is another unbiased estimator.

The length of the ruler would be the censored value in the data. If you draw the histogram of observed values, there should be a mass on the largest value, which is the length of the ruler. The more observations you have, the better the estimate.

I think it should be (N+1)/N * max(X1, ..., XN). Is there anyone who agrees with me?
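The last two answers disagree about whether the correction factor is N/(N+1) or (N+1)/N. A quick Monte Carlo sketch (my code and function names, assuming the simplest model: uncensored i.i.d. Uniform(0, L) measurements) shows that (N+1)/N * max and 2 * sample mean are both unbiased:

```python
import random

def estimator_means(L=1.0, n=10, trials=200_000, seed=0):
    """Monte Carlo means of two candidate estimators of L,
    assuming the measurements are i.i.d. Uniform(0, L)."""
    rng = random.Random(seed)
    s_max, s_mean = 0.0, 0.0
    for _ in range(trials):
        xs = [rng.uniform(0, L) for _ in range(n)]
        s_max += (n + 1) / n * max(xs)   # bias-corrected maximum
        s_mean += 2 * sum(xs) / n        # 2 * sample mean
    return s_max / trials, s_mean / trials
```

Both averages come out near L = 1.0, while the raw maximum alone would average only n/(n+1) * L, confirming the (N+1)/N direction of the correction.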
How to measure 9 minutes using only a 4-minute and a 7-minute hourglass?

17 Answers

The key is understanding that you will have to use the two hourglasses together. Since this problem could be asked in many ways using different values for the hourglasses and the total amount of time, it's more important to understand how you use the tools than to memorize a specific example. The question is used to distinguish those who can apply their knowledge to solve problems from those who memorize answers "from the book". Start both timers. After four minutes, the four-minute timer will have expired and the seven-minute timer will have three minutes remaining. Flip the four-minute timer over. After seven minutes, the seven-minute timer will have expired and the four-minute timer will still have one minute left. Flip the seven-minute timer over. After eight minutes, the four-minute timer will have expired for the second time. The seven-minute timer will have accumulated one minute since its last flip. Flip over the seven-minute timer and, when it expires, nine minutes will have elapsed. For extra measure, you can always throw in something like, "assuming the timers can be flipped over nearly instantly..."

I think you made this much more complicated than it needed to be. Just flip the 7-minute hourglass; when it is done, flip the 4-minute one. When you see the 4-minute timer is halfway done, you have 9 minutes: 7 + 2 = 9. Much easier, yes?!

Start both timers together. When the 4-minute timer is done, flip it. The 7-minute timer will have 3 minutes left. When the 7-minute timer is done, the 4-minute timer will have 1 minute left. Now you can count to 9 minutes by simply letting the 4-minute timer expire (1 min), flipping it and letting it expire (4 min), then flipping it again and letting it expire (4 min). 1 + 4 + 4 = 9.

By trying unsuccessfully to show how smart he is, T$ flunked the interview question. The task is to measure nine minutes, not to guess.
ultraman: all measurement systems and techniques have an inherent accuracy and precision (or repeatability and reproducibility), so the important thing is to understand the level of accuracy and precision required, and to know which technique will meet that level. The most important thing in business is not running off to do the task you've been asked to do with a tool you have handy, but understanding the reason for completing the task, knowing how the output from your work will be used by others, and knowing how the accuracy and precision will impact others down the value chain.

Dodging the question never works, rd. Tim is correct.

Actually, I agree with #T$. Ultraman and d-bag, the goal is to get the job done the most efficient and accurate way. The original question says it has to get done within 9 minutes as well. So you turn the 7-minute timer over. While the seven-minute timer is going, you measure the length of the 4-minute timer with the width of your thumb. Whatever number of thumb widths it is, divide it by 2 and that will (very accurately) give you the halfway point on the hourglass, which equals 2 minutes. Once the seven-minute hourglass is done, flip the 4-minute timer. Once the 4-minute timer hits the halfway point you marked, you have accurately measured 9 minutes.

1st timer   2nd timer    elapsed time
4           7            start both timers
3           6            1 min
2           5            2 mins
1           4            3 mins
0 (flip)    3            4 mins
4           3            4 mins (assuming the flip takes no time, ideally)
3           2            5 mins
2           1            6 mins
1           0 (flip)     7 mins
1           7            7 mins (again, an ideal flip)
0           6            8 mins (flip the 2nd timer to count 1 min)
0 (as is)   7            9 mins

The gist of this problem is that the only way to get an exact time is by having an hourglass empty out completely when you declare 9 minutes are reached.
So let's take a holistic approach: the most time we can count using ONE of the hourglasses is 8 minutes, by starting the 4-minute hourglass, flipping it, then letting it run for another 4 minutes. Say we start both the 4-minute and the 7-minute hourglass at the same time. The 7-minute hourglass runs out before 8 minutes are up, so we flip it. When 8 minutes are up (measured by the 4-minute hourglass), we know 1 minute's worth of sand has deposited since the 7-minute hourglass was flipped. So now flip it upside down and let it drain completely, and we've reached 9 minutes.

Since this situation tests our mental fortitude and problem-solving skills, can we start by opening the lids and emptying the sand from the 4-minute glass? Pour the sand from the 7 into the empty 4 so you are left with three. Pour out that three into a pile and fill the 7-minute glass with the remaining sand, and repeat to get another three. So now you have two piles of three minutes' sand, for a total of 6. Fill up the 4-minute glass and add the two piles of three-minute sand into the 7-minute hourglass to have exactly 9 minutes.

Set the 4-minute timer to go off at two minutes. Set the seven-minute timer. When it goes off, nine minutes will have passed. If you cannot set the four-minute timer to go off after two minutes, set the seven-minute timer when the four-minute one is at the halfway mark.

Please, I would like to get a solution to this question: if 4 boys move 12 bags in 4 minutes, how many boys will it take to move 36 bags in 8 minutes?

Put the 4-minute timer on its side. Once the sand has stopped flowing, start the 7-minute timer. This will be 9 minutes 😅
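The start-both-glasses flip schedule from the first answer can be replayed with a tiny deterministic simulation (a sketch; the state variables and function name are mine):

```python
def nine_minutes():
    """Replay the flip schedule: state = minutes of sand
    left in each glass's top bulb, flips are instantaneous."""
    t = 0
    four, seven = 4, 7      # start both glasses together
    t += 4                  # 4-min glass empties; 7-min has 3 left
    seven -= 4
    four = 4                # flip the 4-min glass
    t += 3                  # 7-min glass empties; 4-min has 1 left
    four -= 3
    seven = 7               # flip the 7-min glass
    t += 1                  # 4-min glass empties; 7-min drained 1 min
    seven -= 1
    seven = 7 - seven       # flip: the 1 drained minute is now on top
    t += seven              # let it run out
    return t                # -> 9
```

Walking the schedule forward step by step like this makes it easy to check any proposed flip sequence for this family of puzzles.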
Quantitative Analyst at Jane Street was asked...
You have 2 decks of cards (each deck contains both red and black cards). One deck has twice the number of cards of the other, with the same color ratio (so one deck has 52 cards and the other has 104, both half red and half black). I offer you a game. First, you get to choose which deck of cards you want to play with. Second, you draw 2 cards at random from your deck of choice. If both are red, I will give you a Ferrari. Which deck of cards would you choose?

13 Answers

The unalert interviewee would answer "it doesn't matter, the probability is the same". While this is true for the first card, you have a higher probability of drawing a second red card from the big deck than from the small one. So I chose the big deck, and I was right.

Yes, he would be right. Mathematically: (26/52 * 25/51) vs (52/104 * 51/103), and 51/206 > 25/102.

Actually, the probabilities are the same for each deck. Consider that because you have a 50/50 chance of drawing your first card red, there's a 50% chance the numerator of the next fraction is reduced by one... so your probability is 26/52 * (25/51 + 26/51)/2 vs 52/104 * (51/103 + 52/103)/2, which are the same.

"G" did the right calculation. To calculate the probability of drawing two red cards in a row, one needs to set up an equation where the first card drawn was red and the SECOND card was red as well. The question asks for the probability of drawing 2 red cards in a row, NOT the probability of drawing a red card and then either a red or a black card.

Intuition tells us that if you add the same number of red and black cards into the original 52-card problem, the probability will go up. Just imagine adding 1 billion red cards and 1 billion black cards.

You should definitely choose the larger deck if both are 50% red, 50% black. Here's another explanation in addition to the other correct ones above.
Each deck is naturally partitioned into maximal sub-stacks, where each sub-stack consists of cards of a single color, either all red or all black. If it is known ahead of time that half the cards are red and half are black, then the expected size of the stack increases with the number of cards in the deck. Correction: the expected size of the **sub-stacks** increases with the number of cards in the deck. Sorry about that.

Extreme case: deck 1 has 1 red and 1 black; deck 2 has 2 red and 2 black. So the more cards, the better your chance.

2 out of 52 is the equivalent of 4 out of 104. For these chances to be equal, the problem should be 2 out of 52 or 4 out of 104. If you still get to draw only two cards to try to get two reds, then the chances should be better with the smaller deck.

Either one. If you don't like how the table is set, rearrange it. I would offer the dealer half of the reward if he/she let me draw 5 more times in the deck with more cards.

It seems that the odds are greater with a larger number of choices. 2:1 ratio.
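The arithmetic in the second answer can be checked exactly with stdlib rational arithmetic (a sketch; the function name is mine):

```python
from fractions import Fraction

def p_two_red(n_cards):
    """Exact probability that two cards drawn without replacement
    from a half-red deck of n_cards are both red."""
    red = n_cards // 2
    return Fraction(red, n_cards) * Fraction(red - 1, n_cards - 1)

small = p_two_red(52)    # 25/102
large = p_two_red(104)   # 51/206
```

Comparing the fractions confirms 51/206 > 25/102, so the larger deck is the right pick.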
3) Poker. 26 red, 26 black. You take one card at a time; each time, you can choose to guess whether it's red. You have only one chance to guess. If you are right, you get 1 dollar. What's the strategy, and what's the expected earning?

13 Answers

Expected earning is 25 cents: 1/2 * 1/2 * 1. The prob of choosing to guess is 1/2, the prob of guessing right is 1/2, and the pay is $1.

I would start picking cards without making a decision to reduce the sample size. This is risky because I could just as easily reduce my chances of selecting red by taking more red cards to start, as I could increase my chances by picking more black cards first. But I like my chances with 52 cards that, at some point, I will at least get back to 50% if I start off by picking red. Ultimately, I can keep picking cards until there is only 1 red left, but I obviously wouldn't want to find myself in that situation, so I would do my best to avoid it by making a decision earlier rather than later. Best case scenario, I pick more blacks out of the deck right off the bat. My strategy would be to first pick 3 cards without making a decision. If I start off by selecting more than 1 red, and thus the probability of guessing red correctly is below 50%, then I will look to make a decision once I get back to the 50% mark. (The risk here is that I never get back to 50%.) However, if I pick more than 1 black card, then I will continue to pick cards without making a choice until I reach 51%, ultimately hoping that I get down to a much smaller sample size where variance is reduced while the odds are in my favor. The expected return, in my opinion, all depends on "when" you decide to guess. If you decide to guess when there is a 50% chance of selecting correctly, then your expected return is 50 cents (50% correct wins you $1; 50% incorrect wins $0; 0.5 + 0 = 0.5). If you decide to guess when there is a 51% chance of selecting red correctly, then the expected return adjusts to (0.51 * $1) + (0.49 * $0) = 51 cents.
So, in other words, your expected return would be a direct function of the percentage probability of selecting correctly, i.e. 50% = 50 cents, 51% = 51 cents, 75% = 75 cents. Thoughts?

There is symmetry between red and black. Each time you pull a card, it is equally likely to be red or black (assuming you haven't looked at the previous cards you pulled). Thus, no matter when you guess, your odds are 50% and the expected return should be 50 cents.

Scheme: guess when the first one is black. p(guess) x p(right) x 1 = 1/2 x 26/51 = 13/51.

0.5; just turn the first card to see if it's red. I think it's more about trading psychology. If you don't know where the price is going, just get out of the market asap. Don't expect anything.

The problem should be: randomly draw a card and don't put it back, and on every draw you have one chance to guess. So the strategy is: on the first draw you randomly guess red. If correct, you get one dollar, and on the next draw you know there are fewer red than black, so you guess black. If your first guess was wrong, you guess red on the next round. It's all about conditioning on the information you know from the previous drawings.

This should be similar to the brainteaser about picking an optimal place in the queue ("if you are the first person whose birthday is the same as anyone in front of you, you win a free ticket"). So in this case we want to find n such that the probability P(first n cards are black) * P(n+1th card is red | first n cards are black) is maximized, and call the n+1th card?

The problem statement is not very clear. What I understand is: you take one card at a time; you can choose to guess, or you can look at it. If you guess, then if it's red, you gain $1, and whatever the result, after the guess the game is over. The answer is then $0.5 under whatever strategy you use. Suppose there are x red and y black cards left: if you guess, your chance of winning is x/(x+y).
If you don't guess, but look at the card and flip the next one, your chance of winning is x/(x+y)*(x-1)/(x+y-1) + y/(x+y)*x/(x+y-1) = x/(x+y), which is the same. A rigorous proof should obviously be done by induction, starting from x,y = 0,1.

The answer above is not 100% correct. In the second scenario, if you don't guess and only look, the total probability of getting red is indeed the same. However, the fact that you looked at the card means you know which of the two terms x/(x+y)*(x-1)/(x+y-1) or y/(x+y)*x/(x+y-1) you are in. Therefore, this argument only holds if you don't get to look at the card, or have no knowledge of what card you passed.

It doesn't matter what strategy you use; the probability is 1/2. It's a consequence of the Optional Stopping Theorem. The fraction of red cards left in the deck at each time is a martingale. Choosing when to stop and guess red is a stopping time. The expected value of a martingale at a stopping time equals its initial value, which is 1/2.

My strategy was to always pick the colour that has been drawn fewer times in the previous picks. Naturally, that colour has a higher probability, because more of those cards are still in the deck. In the model, n = the number of cards already drawn, k = the number of black cards out of n, and m = min(k, n-k), i.e. the count of whichever colour has been drawn less often. After n draws, we can face n+1 different situations, i.e. k = 0, 1, 2, ..., n. To calculate the expected value of the whole game, we need the probability of facing a given situation, which can be computed with combinations, together with the probability of winning the next pick. Every situation has probability (n choose m)/2^n, since every outcome can happen in (n choose m) ways, and the number of all possible outcomes is 2^n.
Then in that situation the probability of winning is (26-m)/(52-n), because there are 26-m cards of the chosen colour left in a deck of 52-n cards. Combining them: [(n choose m)/2^n] * [(26-m)/(52-n)]. Then we have to sum over k from 0 to n, and then sum again over n from 0 to 51. (After the 52nd pick we don't have to choose, therefore we only sum to 51.) I hope it's not that messy without proper math signs. After all, this is a bit too much computation, so I wrote it quickly in Python and got 37.2856419726, which is a significant improvement compared to a basic strategy where you always choose the same colour.

Dynamic programming: let E(R,B) be the expected gain when R red and B black cards remain, with the strategy of guessing whichever colour is more frequent in the rest of the deck. E(0,B) = B for all B, E(R,0) = R for all R, and E(R,B) = [max(R,B) + R*E(R-1,B) + B*E(R,B-1)]/(R+B). I don't know how to estimate E(26,26) quickly.

The question, to me, is not clear. Perhaps on purpose. If so, the best answers would involve asking for clarification.
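The recursion in the dynamic-programming answer runs instantly with memoization. A sketch (note the caveat in the docstring is mine: this recursion values the guess-on-every-card variant, whereas the one-guess game is worth exactly 1/2 by the martingale argument above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def E(r, b):
    """Expected number of correct guesses, guessing the majority
    colour on every card, with r red and b black cards remaining.
    (This is the guess-every-card variant, not the one-guess game.)"""
    if r == 0 or b == 0:
        return r + b         # remaining cards are all one colour
    return (max(r, b) + r * E(r - 1, b) + b * E(r, b - 1)) / (r + b)
```

E(1, 1) gives 1.5 and E(26, 26) comes out near 30.04, noticeably below the 37.2856 quoted above, which suggests a bug in that poster's summation.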
The first question he gave me was not hard.

1. You call someone's house and ask if they have two children. The answer happens to be yes. Then you ask if one of their children is a boy. The answer happens to be yes again. What's the probability that the second child is a boy?

2. (Much harder) You call someone's house and ask if they have two children. The answer happens to be yes. Then you ask if one of their children is named William. The answer happens to be yes again. (We assume William is a boy's name, and that it's possible that both children are Williams.) What's the probability that the second child is a boy?

12 Answers

The second answer should be 1/3.

Actually the answer is 3/4. Let's go through the conditional probabilities. If the first child is named William and the second is NOT, the prob. of the second child being a boy is 1/2. If the second child is named William and the first is NOT, the prob. of the second child being a boy is 1. If both children are named William, the prob. of the second child being a boy is 1. Now, assuming equal probs. of the first, second, or both children being named William, the total probability of the second child being a boy is (1/3)(1/2) + (1/3)(1) + (1/3)(1) = 5/6.

^ Correction: I meant to write that the answer is 5/6.

You're both incorrect. The first answer is 2/3 and the second answer is 4/5. Let B stand for a boy (not named William), Bw for a boy named William, and G for a girl.

1. The sample space is: (B,B) (B,G) (G,B). It's easy to see that two of these cases have a boy as the second child, so the probability is 2/3.

2. The sample space is: (Bw,Bw) (Bw,G) (G,Bw) (Bw,B) (B,Bw). 4 of the 5 cases have a boy as the second child, and therefore the probability is 4/5.

The first answer is 2/3, as mentioned. But the second question hasn't been answered correctly here. Given a person X, define e = P[X's name = William | X = boy].
Then, note that P[X's name = William] = P[X's name = William | X = boy] * P[X = boy] = e/2 (the problem states that William is a boy's name). Letting C1 be child 1 and C2 be child 2, we are asked to find P[C2 = boy | Y], where, for notational simplicity, Y denotes the event "C1's name = William or C2's name = William". By Bayes' rule, P[C2 = boy | Y] = P[C2 = boy, Y] / P[Y].

Denominator: P[Y] = 1 - P[C1's name != William AND C2's name != William] = 1 - e/2 * e/2 = 1 - e^2/4.

Numerator: P[C2 = boy, Y] = P[Y | C2 = boy] * P[C2 = boy] = 1/2 * P[C1's name = William or C2's name = William | C2 = boy]. To compute P[C1's name = William or C2's name = William | C2 = boy], there are 3 cases. Case 1: C1's name is William, C2's name isn't William (given C2 is a boy): e/2 * (1 - e). Case 2: C2's name is William, C1's name isn't William (given C2 is a boy): e * (1 - e/2). Case 3: both names are William (given C2 is a boy): e/2 * e. Summing these 3 cases gives e/2 - e^2/2 + e - e^2/2 + e^2/2 = 3e/2 - e^2/2; multiplying by the factor 1/2 gives the numerator.

Dividing, we have (1/2)(3e/2 - e^2/2) / (1 - e^2/4) = e(3 - e) / (4 - e^2). As a sanity check, note that setting e = 1 implies that all boys are named William, and our probability is 2/3, as in the first question. Setting e = 0 implies that no boys are named William, in which case our probability is 0.

Unless William's prior distribution is provided, the only information we have is that William is a boy's name. Thus the event is equivalent to "at least one of them is a boy", giving 2/3. Be careful when you calculate probabilities with outcomes that are NOT equally likely.

1. BB, BG, GB, GG are 1/4 each, which reduces to only BB, BG, GB with probability 1/3 each. So the probability of BB is 1/3.

2. Let w be the probability of the name William. The probability of having at least one William in the family is, for BB: 2w - w^2; for BG: w; for GB: w; for GG: 0.
So the probability of BB with at least one William is (2w - w^2)/(2w + 2w - w^2) ~ 1/2.

1. P[C2 = boy | C1 = boy] = 0.5, since these are independent events.
2. P[C2 = boy | C1 = boy and is called William] = 0.5, since these are independent events.

The answer by the Anonymous poster on Sep 28, 2014 gets closest. However, I think the calculation P[Y] = 1 - P[C1's name != William AND C2's name != William] should result in 1 - (1 - e/2)(1 - e/2) = e - (e^2)/4, as opposed to the poster's answer 1 - (e^2)/4, which overstates the probability of Y. For example, assume e (P[X is William | X is a boy]) is 0.5, meaning half of all boys are named William. e - (e^2)/4 gives P(Y) = 7/16, where Y = "C1 is William or C2 is William". 1 - (e^2)/4 gives P(Y) = 15/16, which is way too high, because there is more than one case in which both C1 and C2 are not Williams, e.g. if both are girls, or both are boys not named William, etc. So the final answer becomes: (3e/2 - (e^2)/2) * 0.5 / (e - (e^2)/4) = (3e - e^2)/(4e - e^2) = (3 - e)/(4 - e). One reason why I thought this might be incorrect was that setting e = 0 does not give P(C2 = boy | Y) = 0 like the Anonymous poster's answer does. However, I think e = 0 violates the question's assumptions: if e = 0, no boy is named William, but the question also says that William is a boy's name, so there can be no person named William, and then how did the question come up with a person named William?

I think "second child" refers to the other child (the one not on the phone). In this case the answer to the first is 1/3 and to the second is (1-p)/(2-p), where p is the total probability of the name William. As a sanity check, if all boys are named William the answers coincide.
I think you guys are doing way too much; this is a trick question. These are completely independent events: I could name my child William, I could name it Lollipop, and the chances of it being a boy are still 0.5, regardless of its brother's name as well. (I'm going with 0.5 for both questions.) Keep in mind this is a quick phone interview question, so they won't give anything that's too calculation-heavy, involving e^2, exponents, fractions, etc., because you're supposed to be able to do it in your head.

The answer by indosaurabh is correct. I go to Harvard.
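The closed form (3 - e)/(4 - e) for P(child 2 is a boy | at least one child is named William) can be sanity-checked by simulation (a sketch; the function name and the choice e = 0.1 are mine):

```python
import random

def p_second_child_boy(e, trials=1_000_000, seed=42):
    """Estimate P(child 2 is a boy | at least one child is named
    William): each child is a boy with prob 1/2, and each boy is
    independently named William with prob e."""
    rng = random.Random(seed)
    cond = boys = 0
    for _ in range(trials):
        kids = [rng.random() < 0.5 for _ in range(2)]   # True = boy
        williams = [b and rng.random() < e for b in kids]
        if any(williams):
            cond += 1
            boys += kids[1]
    return boys / cond
```

For e = 0.1 the estimate lands near (3 - 0.1)/(4 - 0.1) ≈ 0.744, and as e shrinks it approaches 3/4, matching the corrected 5/6-style reasoning rather than the naive 1/2 or 1/3.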
Quantitative Researcher at Citadel was asked...
Given log X ~ N(0,1), compute the expectation of X.

13 Answers

This is a basic probability question.

Exp[1/4]

exp(mu + (sigma^2)/2) = exp(0 + 1/2) = exp(1/2)

Let Y = log(X); then X = exp(Y) = r(Y). If we call the pdf of X f(X), then E[X] = integral(X f(X) dX). By a change of variables, f(x) = g(r^-1(X)) (r^-1(X))'. Plug this into E[X] = integral(X f(X) dX), and we get integral(f(y) dy), which equals 1.

Suppose the density function of Y is P(y) and the one for X is F(x); it obeys P(y)*dy = F(x)*dx. Then the expectation of X is E(x) = Integral(x*F(x)*dx) = Integral(Exp(y)*P(y)*dy). If you plug in the Gaussian density and complete the square, you will find E(x) = Integral(Exp(1/2) * P(y - 1/2) * d(y - 1/2)) = Exp(1/2). So mojo's answer is correct.

I'm not that sure, as I got E(x) = 4. I substituted log X = y, e^y = X, and e^2y = t; and please do not forget to change the integration limits.

Do they care if you explain the theory or not?

I just looked at it: it's standard normal, therefore x = 50%... Sorry, misread the problem; ignore.

X has a log-normal distribution, so yes, the mean is exp(mu + sigma^2/2) = exp(1/2).

Expanding on the correct answers above: E[X] = E[exp(logX)], and logX is normally distributed. So E[X] is the moment-generating function (mgf) of a standard normal distribution, evaluated at 1. The mgf of a normal distribution with mean mu and SD sigma is exp(mu*t + (1/2)*sigma^2*t^2); now set mu = 0, sigma = 1, t = 1 to get exp(1/2).

Complete the square in the integral.
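The mgf route in the last full answer is easy to verify numerically; a stdlib-only Simpson's-rule sketch (my function name; the truncation interval is an assumption justified by the integrand's decay) recovers exp(1/2):

```python
import math

def normal_mgf(t, lo=-12.0, hi=12.0, n=20_000):
    """Numerically integrate exp(t*x) * standard-normal pdf with
    composite Simpson's rule; the integrand exp(t*x - x^2/2) is
    negligible outside [lo, hi] for t = 1."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)   # Simpson weights
        total += w * math.exp(t * x - x * x / 2)
    return total * h / (3 * math.sqrt(2 * math.pi))
```

normal_mgf(1.0) agrees with math.exp(0.5) ≈ 1.6487 to high precision, confirming E[X] = exp(1/2) and ruling out the Exp[1/4], 1, and 4 answers above.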
Quantitative Trader at Jane Street was asked...
If we flip a coin 100 times, what is the probability of getting an even number of heads?

13 Answers

1/2. Let p_(e,n) be the probability that we have an even number of heads given n tosses of a coin. We have the recursion p_(e,n) = (prob of heads on toss n AND an odd number of heads after n-1 tosses) + (prob of tails on toss n AND an even number of heads after n-1 tosses) = 1/2 p_(o,n-1) + 1/2 p_(e,n-1). But p_(o,n-1) = 1 - p_(e,n-1), so p_(e,n) = 1/2 (1 - p_(e,n-1)) + 1/2 p_(e,n-1) = 1/2.

1/2. Whether the number of heads is even or odd is ultimately determined by the nth flip, for any n. The probability of switching from even to odd, or vice versa, is 1/2.

((2^50)+1) / (2^100). Even numbers include 0 heads to 100 heads. There exist 50% even complements and 50% odd complements from 1 to 100, so half our sample space. Now we do our zero case, which is no heads, and there is only 1 outcome.

((2^99)+1) / (2^100). Even numbers include 0 heads to 100 heads. There exist 50% even complements and 50% odd complements from 1 head to 100 heads, so half our sample space. Now we do our zero case, which is no heads, and there is only 1 outcome.

1. 0-99 flips: 50% even; 50% odd. 2. P(even after 100 flips) = P(odd after 99)*0.5 + P(even after 99)*0.5 = 0.5*0.5 + 0.5*0.5 = 0.5.

All your answers are wrong. Just consider a game with 4 coins. Overall you have 2^4 = 16 different combinations. The feasible combinations with 50/50 heads/tails are: 1100, 1010, 1001, 0110, 0101, 0011. So just 6 out of 16, which gives a prob of 3/8. The formula of Anonym tells us 1) (2^2 + 1)/(2^4) = 5/16 or 2) (2^3 + 1)/(2^4) = 9/16, and the solution of ky tells us P(odd after 3)*0.5 + P(even after 3)*0.5 but forgets about 111x. However, we can calculate the probability distribution of all tosses. Just model 0 = tail, 1 = head and look at the sum of all coin tosses S. The law of large numbers tells us that the expression [S - E(coin toss)]/sqrt(100*0.25) is approximately normally distributed.
Thus, S ~ N(50, 25) approximately, and consequently we see on average 1:1 heads and tails.

(Sum[i ranging from 0 to 50 inclusive] 100 Choose 2i) * (1/2^100)

Ans: 1/2. Lemma 1: For any n >= 1, sum_(i even, i from 0 to n) n Choose i = sum_(j odd, j from 0 to n) n Choose j.

Proof of Lemma 1: For odd n, it is obviously true since n Choose r = n Choose (n-r): if n is odd and r is even, then (n-r) is odd, and vice versa. Hence, with n odd, the sum over all even i equals the sum over all odd j. For even n, we may deduce that sum_(i even, i from 0 to n) n Choose i = sum_(r from 0 to n-1) (n-1) Choose r = sum_(j odd, j from 0 to n) n Choose j. In other words, for n even, the sum of the even-indexed terms at the nth layer of Pascal's Triangle is equal to the sum of all terms at the (n-1)th layer. For example:

1 3 3 1 (3rd layer of Pascal's Triangle)
1 4 6 4 1 (4th layer of Pascal's Triangle)
1+6+1 = 1+3+3+1 = 4+4

Since sum_(i even) n Choose i = sum_(j odd) n Choose j, and obviously the two sums together equal 2^n, we have sum_(i even, i from 0 to n) n Choose i = 2^(n-1). Therefore, in n (>= 1) coin tosses, P(getting an even number of heads) = 2^(n-1)/2^n = 1/2.

There is a bijection between the set of coin-toss sequences with an even number of heads and those with an odd number of heads. (One can easily prove this by induction: it is true for n = 1, and the induction step is trivial because the even sets are carried over to the odd sets and vice versa.) Hence p = 1/2.
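The lemma's identity, and hence the 1/2 answer, can be checked with exact integer arithmetic (a quick sketch):

```python
from math import comb

n = 100
# split the 2**n equally likely sequences by parity of the head count
even_ways = sum(comb(n, k) for k in range(0, n + 1, 2))
odd_ways = sum(comb(n, k) for k in range(1, n + 1, 2))
assert even_ways == odd_ways == 2 ** (n - 1)
# so P(even number of heads) = 2**(n-1) / 2**n = 1/2
```

The same script with n = 4 also shows where the 3/8 answer goes wrong: 8 of the 16 sequences have an even number of heads (including 0 and 4 heads, not just the exactly-two-heads cases).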
How many numbers between 1 and 1000 contain a 3?

13 Answers

300 numbers contain a 3, but you counted numbers of the form x33, 3x3, and 33x *twice*, so you must subtract them: 300 - 30 = 270. But you subtracted 333 once too many, so add it back: 300 - 30 + 1 = 271. The answer is 271.

271 does not seem right. I think it is 111. Aren't there just 11 numbers containing a 3 between 0 and 100, like 3, 13, 23, 30, 33, etc.? So between 0 and 1000 there are 111, after adding in for 300.

Thanks Candidate, or more simply: for 300 to 399, 100 numbers; for each other x00 to x99 range, 19 numbers, so 19 x 9 = 171 numbers. Total of 271 numbers.

I would go with elimination of one character from a base-10 numbering system, which gives you a base-9 numbering system: 9^3 = 729 permutations of a base-9 system (a system with no digit 3) with 3 digits. Since 10^3 = 1000, 10^3 - 9^3 = 271; thus 271 numbers have a 3 in them.

1000 does not contain a 3. So, count the 3-digit numbers without a three: there are 9 choices for the first entry, 9 for the second, and 9 for the third. So there are 729 numbers without a 3, and 1000 - 729 = 271 with a 3.

I have a much simpler and faster method: let A be the cardinality of numbers between 1 and 1000 that contain a 3, and let A' be the cardinality of those that do not. There are 3 digits, each of which can take one of 10 values (0,1,2,3,4,5,6,7,8,9). For A' we have 9 possibilities per digit because 3 is excluded, so A' = 9^3 = 729. Hence the count of numbers from 1 to 1000 that contain at least one 3 is 1000 - 729 = 271.

lol, I'm in 7th grade and this question is easy to me. First you count the numbers which DON'T have a three: there are 9 choices for the first digit, 9 for the second digit, and 9 for the third. You probably noticed that this counts 0 to 999 instead of 1 to 1000, but it's okay because we count the same amount of numbers.
9^3 = 729; |729 - 1000| = 271 (I like using absolute value because it makes me look cool; it's just a way to show "difference" in math).

3xx: 100 numbers. a3x, with a not 3 (to avoid duplication): 10*9 = 90. ab3, with neither a nor b equal to 3: 9*9 = 81. Total: 100 + 90 + 81 = 271.

You can use this formula to work it out for any power of 10: T_(n+1) = 9*T_n + 10^n, where T_n is the count of numbers containing a 3 between 1 and the previous power of 10, T_(n+1) is the count for the power you are working out, and 10^n, the previous power of ten, is what you add on. You must start off knowing that there is one 3 between 1 and 10.
For 100: T = 9*1 + 10^1 = 19
For 1000: T = 9*19 + 10^2 = 271
For 10,000: T = 9*271 + 10^3 = 3439

It is an infinite number: 3.1, 3.11, 3.111, 3.1111, ...
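A one-line brute force over digit strings confirms the 271 answers above (and shows the 111 answer is off):

```python
# count the integers in [1, 1000] whose decimal form contains a '3'
count = sum(1 for i in range(1, 1001) if '3' in str(i))
# count == 271, matching 1000 - 9**3
```

The same pattern generalizes: replacing 1001 with 10_001 reproduces the 3439 from the recurrence answer.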
Quantitative Trader at Jane Street was asked...
Imagine I flip 100 coins in a sequence, and you need to guess the sequence. You are allowed to ask one yes/no question. What do you ask to maximize the probability of guessing the sequence? 9 Answers

- Are there more heads than tails?
- Any question which provides useful information gives the same probability (1/2^99) of getting the sequence correct.
- At least I don't have to guess every flip separately, since there is no difference between flips, so I'd guess one side throughout, all heads or all tails. Then only one question is left: which is more probable, heads or tails? So the first answer above is right.
- The highest information gain comes from splitting all possible sequences into two equal-sized groups, so the simple question "is the first flip heads?" yields the best chance. Asking if there are more heads than tails does not split the possibilities into two equal groups (one group contains the tied 50/50 sequences).
- This is really an open-ended question. Without asking any question, our probability of success is 1/2^100; if we ask about a single position in the sequence, our success rate is 1/2^99. At this point it's very hard to say whether any other type of question can give a higher success rate. How can we prove there is no way of conditioning (asking questions) that does better?
- I was also asked this question. Totally lost.
- If you ask whether the 1st toss is heads, you are left with 2^99 cases afterward. If you ask whether there are more heads than tails: on "yes" you are left with (2^100 - C(100,50))/2 = 2^99 - C(100,50)/2 cases, which means you do better; on "no" (which includes the ties) you are left with 2^99 + C(100,50)/2 cases, which means you do worse. On average you do exactly as well as with the position question. Hence you can ask whatever question you like and cannot do better than doubling your chance from 1/2^100 to 1/2^99.
- I think it's asking if there are more heads than tails. Once you know there are more heads than tails, you know you need at least 50 of them, and you have C(100, 50) places to put them; then you have 2^50 choices for the remaining coins, so your total number of possible combinations is C(100, 50) + 2^50. If you only ask if the first toss is heads, then you have 2^99 possible combinations. Using Wolfram Alpha, 2^99 is bigger than C(100, 50) + 2^50, so asking if there are more heads than tails is better. If I had to compare 2^99 and C(100, 50) + 2^50 without Wolfram Alpha I would be screwed; a rough approximation wouldn't work since they're of the same order of magnitude.
- Any meaningful question will split the sample space into two non-empty parts. If the sample space has size X (= 2^100), say the two parts have sizes A and X - A. Then the probability of guessing correctly is (1/A)*(A/X) + (1/(X - A))*((X - A)/X) = 2/X = 1/2^99. So asking about the value of the first toss is as good as anything.
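The final answer's argument (any question with two non-empty answer groups gives success probability exactly 2/2^n) can be checked by exhaustive enumeration for a small n. A sketch using n = 6 coins instead of 100; the function names are mine:

```python
from itertools import product

def success_prob(n, question):
    """Exact success probability of the best strategy: hear the yes/no
    answer, then guess one sequence consistent with it.  This equals
    sum over answer-groups of (|group|/2^n) * (1/|group|), i.e.
    (number of non-empty groups) / 2^n."""
    seqs = list(product('HT', repeat=n))
    groups = {question(s) for s in seqs}
    return len(groups) / len(seqs)

n = 6
first_flip = success_prob(n, lambda s: s[0] == 'H')               # "is the first flip heads?"
majority = success_prob(n, lambda s: s.count('H') > s.count('T')) # "more heads than tails?"
print(first_flip, majority)  # 0.03125 0.03125  (both equal 2/2**6)
```

Both questions give the same success probability, even though the majority question splits the sequences unevenly: the uneven split is exactly compensated by how often each answer is heard.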
Quantitative Researcher at Jane Street was asked...
If X, Y and Z are three random variables such that X and Y have a correlation of 0.9, and Y and Z have a correlation of 0.8, what are the minimum and maximum correlation that X and Z can have? 8 Answers

- 0.9
- http://www.johndcook.com/blog/2010/06/17/covariance-and-law-of-cosines/
- 0.98 & 0.46
- http://wolfr.am/1i1XT4P
- How'd you get 0.98 and 0.46?
- The correlation matrix must be positive semidefinite, so det(Sigma) >= 0, which yields 0.98 and 0.46.
- maximum: 0.9*0.8 + sqrt(1 - 0.9^2)*sqrt(1 - 0.8^2) = 0.9815; minimum: 0.9*0.8 - sqrt(1 - 0.9^2)*sqrt(1 - 0.8^2) = 0.4585
- How do you know this?
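The bounds in the thread follow from requiring the 3x3 correlation matrix to be positive semidefinite; det >= 0 reduces to a downward quadratic in r = corr(X, Z), whose roots are the bounds. A NumPy sketch to verify (variable names are mine):

```python
import numpy as np

a, b = 0.9, 0.8  # corr(X, Y) and corr(Y, Z)

# det of [[1, a, r], [a, 1, b], [r, b, 1]] is 1 + 2*a*b*r - a**2 - b**2 - r**2,
# a downward quadratic in r; its roots are the feasible min and max.
lo = a * b - np.sqrt((1 - a**2) * (1 - b**2))
hi = a * b + np.sqrt((1 - a**2) * (1 - b**2))
print(round(lo, 4), round(hi, 4))  # 0.4585 0.9815

# Sanity check: scan candidate r values and keep those for which the
# matrix is PSD (all eigenvalues >= 0, up to numerical tolerance).
rs = np.linspace(-1, 1, 4001)
ok = [r for r in rs
      if np.linalg.eigvalsh([[1, a, r], [a, 1, b], [r, b, 1]]).min() >= -1e-9]
print(min(ok), max(ok))  # matches lo and hi to grid resolution
```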
See Interview Questions for Similar Jobs
- Analyst
- Software Engineer
- Data Scientist
- Associate
- Quantitative Research Analyst
- Quantitative Researcher
- Business Analyst
- Data Analyst
- Trader
- Intern
- Financial Analyst
- Vice President
- Software Developer
- Senior Software Engineer
- Consultant
- Investment Banking Analyst
- Summer Analyst
- Quantitative Associate
- Strategist