Quantitative research Interview Questions | Glassdoor

# Quantitative research Interview Questions

474 quantitative research interview questions shared by candidates

## Top Interview Questions


### Quantitative Researcher Summer Intern at Jane Street was asked...

Apr 17, 2011
3) Poker. 26 red, 26 black. You draw one card at a time, and at any point you can choose to guess whether the next card is red. You have only one chance to guess. If you are right, you get 1 dollar. What's the strategy, and what are the expected earnings?

13 Answers

- Expected earnings are 25 cents: 1/2 * 1/2 * $1, where the probability of choosing to guess is 1/2, the probability of guessing right is 1/2, and the payoff is $1.

- I would start picking cards without making a decision, to reduce the sample size. This is risky because I could just as easily reduce my chances of selecting red by taking more red cards to start, as I could increase my chances by picking more black cards first. But I like my chances with 52 cards: at some point I will at least get back to 50% if I start off by picking red. Ultimately, I can keep picking cards until there is only 1 red left, but I obviously wouldn't want to find myself in that situation, so I would do my best to avoid it by deciding earlier rather than later. Best case, I pick more blacks out of the deck right off the bat. My strategy would be to first pick 3 cards without making a decision. If I start off by selecting more than 1 red, so the probability of guessing red correctly is below 50%, then I will look to make a decision once I get back to the 50% mark (the risk being that I never get back to 50%). However, if I pick more than 1 black card, I will continue to pick cards without making a choice until I reach 51%, hoping to get down to a much smaller sample size, with reduced variance, while the odds are in my favor. The expected return, in my opinion, depends entirely on when you decide to guess. If you guess when there is a 50% chance of being correct, your expected return is 50 cents (0.5 * $1 + 0.5 * $0). If you guess when there is a 51% chance of selecting red correctly, the expected return becomes (0.51 * $1) + (0.49 * $0) = 51 cents. In other words, your expected return is a direct function of the probability of selecting correctly: 50% = 50 cents, 51% = 51 cents, 75% = 75 cents. Thoughts?

- There is symmetry between red and black. Each time you pull a card it is equally likely to be red or black (assuming you haven't looked at the previous cards you pulled). Thus no matter when you guess, your odds are 50% and the expected return should be 50 cents.

- Scheme: guess when the first one is black. p(guess) * p(right) * $1 = 1/2 * 26/51 = 13/51.

- 0.5: just turn the first card to see if it's red. I think it's more about trading psychology. If you don't know where the price is going, just get out of the market asap. Don't expect anything.

- The problem should be: draw cards at random without replacement, and on every draw you have one chance to guess. Then the strategy is: on the first draw, guess red at random. If correct, you get one dollar, and on the next draw you know there is less red than black, so you guess black. If your first guess was wrong, you guess red on the next round. It's all about conditioning on the information from the previous draws.

- This should be similar to the brainteaser about picking an optimal place in a queue ("if you are the first person whose birthday is the same as anyone in front of you, you win a free ticket"). So here we want to find n such that P(first n cards are black) * P((n+1)th card is red | first n cards are black) is maximized, and call the (n+1)th card?

- The problem statement is not very clear. What I understand is: you take one card at a time, and you can either guess or look at it. If you guess and it's red, you gain $1, and whatever the result, the game is over after the guess. The answer is then $0.50 under any strategy. Suppose there are x red and y black cards left. If you guess, your chance of winning is x/(x+y). If instead you look at this card and guess on the next one, your chance of winning is x/(x+y)*(x-1)/(x+y-1) + y/(x+y)*x/(x+y-1) = x/(x+y), which is the same. A rigorous proof is by induction, starting from x,y = 0,1.

- The answer above is not 100% correct. If you don't guess and only look, the total probability of the next card being red is indeed the same; however, the fact that you look at the card means you know whether the relevant probability is x/(x+y)*(x-1)/(x+y-1) or y/(x+y)*x/(x+y-1). Therefore this argument only holds if you don't get to look at the card, or have no knowledge of the cards you passed.

- It doesn't matter what strategy you use: the probability is 1/2. It's a consequence of the Optional Stopping Theorem. The fraction of red cards left in the deck at each time is a martingale, and choosing when to stop and guess red is a stopping time. The expected value of a martingale at a stopping time equals its initial value, which is 1/2.

- My strategy was to always pick the colour that has been drawn fewer times so far; naturally that colour has a higher probability, because more of it remains in the deck. In the model, n = the number of cards already drawn, k = the number of black cards among them, and m = min(k, n-k), i.e. the count of whichever colour has been drawn less often. After n draws we can face n+1 different situations, k = 0, 1, 2, ..., n. To calculate the expected value of the whole game we need the probability of facing each situation, which can be computed with combinations, together with the probability of winning the next pick. Each situation has probability (n choose m)/2^n, since each outcome can happen in (n choose m) ways and there are 2^n possible outcomes. In that situation the probability of winning the next pick is (26-m)/(52-n), because 26-m cards of the chosen colour remain in a deck of 52-n cards. Combining them: [(n choose m)/2^n] * [(26-m)/(52-n)]. Then sum over k from 0 to n, and sum again over n from 0 to 51 (after the 52nd pick there is nothing left to choose, so we only sum to 51). I hope it's not too messy without proper math notation. This is a bit too much computation by hand, so I wrote it quickly in Python and got 37.2856419726, a significant improvement over the basic strategy of always choosing the same colour.

- Dynamic programming: let E(R,B) be the expected gain with R red and B black remaining, where the strategy is to guess whichever colour is more plentiful. E(0,B) = B for all B, E(R,0) = R for all R, and E(R,B) = [max(R,B) + R*E(R-1,B) + B*E(R,B-1)]/(R+B). I don't know how to estimate E(26,26) quickly.

- The question, to me, is not clear. Perhaps on purpose. If so, the best answers would involve asking for clarification.
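The Optional Stopping Theorem answer (no stopping rule beats 1/2 in the one-guess game) can be verified with a short exact dynamic program. This is a sketch I am adding for illustration, not from the thread; the function name `value` is mine, and exact fractions avoid any floating-point doubt:

```python
from fractions import Fraction
from functools import lru_cache

# Exact optimal-stopping value of the one-guess game: with r red and b
# black cards left, either guess "red" now (win with probability r/(r+b))
# or reveal one more card and continue.
@lru_cache(maxsize=None)
def value(r, b):
    if r == 0:
        return Fraction(0)   # only black cards remain: any guess loses
    if b == 0:
        return Fraction(1)   # only red cards remain: guess now and win
    guess_now = Fraction(r, r + b)
    keep_going = (Fraction(r, r + b) * value(r - 1, b)
                  + Fraction(b, r + b) * value(r, b - 1))
    return max(guess_now, keep_going)

print(value(26, 26))   # prints 1/2: no stopping rule beats a coin flip
```

An induction on the recursion shows value(r, b) = r/(r+b) for every state, which is exactly the martingale claim. Note this checks the one-chance game as stated; the "always pick the rarer colour" and E(R,B) answers above describe a different game where you guess on every card.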


### Quantitative Researcher at Citadel was asked...

Nov 18, 2015
Given log X ~ N(0,1), compute the expectation of X.

13 Answers

- This is a basic probability question.

- Exp[1/4]

- exp(mu + sigma^2/2) = exp(0 + 1/2) = exp(1/2)

- Let Y = log(X), so X = exp(Y) = r(Y). Calling the pdf of X f(X), E[X] = integral(X f(X) dX). By a change of variables, f(x) = g(r^-1(X)) * (r^-1(X))'; plugging this into E[X] = integral(X f(X) dX), we get integral(f(y) dy), which equals 1.

- Suppose the density function of Y is P(y) and the one for X is F(x); then P(y)*dy = F(x)*dx, so the expectation of X is E(x) = Integral(x*F(x)*dx) = Integral(Exp(y)*P(y)*dy). If you plug in the Gaussian density with its standard deviation, you will find E(x) = Integral(Exp(1/2)*P(y - 1/2)*d(y - 1/2)) = Exp(1/2). So mojo's answer is correct.

- I'm not that sure, as I got E(x) = 4. I substituted log X = y, e^y = X, and e^2y = t, and please don't forget to change the integration limits.

- Do they care if you explain the theory or not? I just looked at it: it's standard normal, therefore x = 50%.

- Sorry, misread the problem. Ignore.

- X has a log-normal distribution, so yes, the mean is exp(mu + sigma^2/2) = exp(1/2).

- Expanding on the correct answers above: E[X] = E[exp(log X)], and log X is normally distributed, so E[X] is the moment-generating function (mgf) of a standard normal distribution evaluated at 1. The mgf of a normal distribution with mean mu and SD sigma is exp(mu*t + (1/2)*sigma^2*t^2); set mu = 0, sigma = 1, t = 1 to get exp(1/2).

- Complete the square in the integral.
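The exp(1/2) answer is easy to sanity-check by simulation. A minimal sketch (the sample size and seed are arbitrary choices of mine):

```python
import math
import random

# Monte Carlo sanity check that E[X] = exp(1/2) when log X ~ N(0, 1).
random.seed(0)
n = 200_000
mean_x = sum(math.exp(random.gauss(0.0, 1.0)) for _ in range(n)) / n

print(round(math.exp(0.5), 4))   # closed form: 1.6487
print(round(mean_x, 3))          # simulated mean, close to the closed form
```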

### Quantitative Researcher at Jane Street was asked...

May 22, 2013
If X, Y and Z are three random variables such that X and Y have a correlation of 0.9, and Y and Z have a correlation of 0.8, what are the minimum and maximum correlation that X and Z can have?

8 Answers

- 0.9

- http://www.johndcook.com/blog/2010/06/17/covariance-and-law-of-cosines/

- 0.98 & 0.46

- http://wolfr.am/1i1XT4P

- How'd you get 0.98 and 0.46?

- The correlation matrix must be nonnegative definite, so det(Sigma) >= 0, which gives 0.98 and 0.46.

- Maximum: 0.9*0.8 + sqrt(1-0.9^2)*sqrt(1-0.8^2) = 0.9815. Minimum: 0.9*0.8 - sqrt(1-0.9^2)*sqrt(1-0.8^2) = 0.4585.

- How do you know this?
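The det(Sigma) >= 0 answer can be checked numerically: the extreme correlations are exactly where the 3x3 correlation matrix becomes singular. A sketch of that computation (coefficient values follow from expanding the determinant):

```python
import numpy as np

# The extreme values of rho_xz make the 3x3 correlation matrix singular:
# det = 1 + 2*(0.9)*(0.8)*rho - 0.9**2 - 0.8**2 - rho**2 = 0,
# i.e. rho**2 - 1.44*rho + 0.45 = 0.
lo, hi = sorted(np.roots([1.0, -1.44, 0.45]).real)
print(round(lo, 4), round(hi, 4))   # 0.4585 0.9815

# Cross-check against rho_xy*rho_yz +/- sqrt((1 - rho_xy^2)*(1 - rho_yz^2)).
band = np.sqrt((1 - 0.9**2) * (1 - 0.8**2))
print(round(0.72 - band, 4), round(0.72 + band, 4))   # 0.4585 0.9815
```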

### Quantitative Researcher at Jane Street was asked...

Nov 11, 2015
You have two decks of cards: a 52-card deck (26 black, 26 red) and a 26-card deck (13 black, 13 red). You randomly draw two cards and win if both are the same color. Which deck would you prefer? What if the 26-card deck was randomly drawn from the 52-card deck? Which deck would you prefer then?

11 Answers

- I responded immediately to the first part. The second part took me a bit longer: I immediately said my intuition was that the third deck and the first deck were equally good, but I couldn't give a rigorous proof very quickly (it took about 30 seconds).

- Actually I think the third deck is better than the first deck, because the goal is to "draw two cards of the same color", not "draw two black cards". Compare the following decks: a deck with 13 black and 13 red, a deck with 26 black, and a deck with 26 red. The chances of drawing two same-color cards are 6/25, 1, and 1 respectively. With a little math you can see that any distribution of 26 cards is better than or equally as good as 13 red and 13 black.

- @curious_cat: I think that only implies the third deck is better than the second deck (the second has 13/13 while the first has 26/26).

- 1) P(win | 52-card deck) = 25/51 and P(win | even 26-card deck) = 12/25, so the 52-card deck is better. 2) P(win | n red cards in a random 26-card deck) = (n/26)*((n-1)/25) + ((26-n)/26)*((26-n-1)/25) = n^2/325 - 2n/25 + 1. Taking the derivative and solving for the root: P' = 2n/325 - 2/25 = 0 -> n = 13, which is a minimum. Interpretation: having equal numbers of red and black cards in the deck MINIMIZES your chance of winning. Because the last deck is the same as the second deck (26 cards, split evenly) except that its split may be uneven, the randomly selected deck is better than the evenly-split one. But is it better than the 52-card deck? For this, we use the hypergeometric distribution (like the binomial distribution, but for trials without replacement) for the odds of a 26-card deck containing n red cards: P(selecting n red cards for the random 26-card deck) = [(52-26) C (26-n)] * [26 C n] / [52 C 26] = (2^43 * 3^17 * 5^12 * 7^4 * 11^4 * 13^4 * 17 * 19^2 * 23^2) / (29 * 31 * 37 * 41 * 43 * 47 * (n!)^2 * ((26-n)!)^2). From here, all that's left is to combine these probabilities with P(win | n red cards) = n^2/325 - 2n/25 + 1 for each deck containing 0 through 26 red cards. If the result were larger than 25/51, we could say definitively that we prefer the randomly selected 26-card deck to the even 52-card deck. However, doing this out reveals that the probability of winning with the randomly selected deck = 25/51 exactly. Therefore, the odds of winning are THE SAME with either the even 52-card deck or the 26-card deck randomly selected from it. Imagine, all that math to prove a simple equality! :) Q.E.D.

- The 3rd deck is the same as the 1st deck; we do not need to calculate it by hand. P(I randomly pick 2 cards from a 52-card deck) = P(I always pick the top 2 cards of the 52-card deck) = P(you shuffle the deck, then I pick the top 2) = P(you shuffle the deck, throw away the bottom half, then I pick the top 2) = P(pick a random 26-card deck, then I take its top 2 cards) = P(pick a random 26-card deck, then I randomly choose 2 cards from it).

- By this logic, even if you pick only a 4-card deck at random from the 52 for me to choose 2 cards from, it's the same probability as choosing 2 at random from the 52-card deck directly.

- The 3rd deck is better. Suppose the 3rd deck has k red cards. The probability of getting 2 cards of the same colour is (C(k,2) + C(26-k,2))/C(26,2). It is easy to see that this is minimized at k = 13, which is the first deck. So essentially any random 26 cards is at least as good as a 13-13 split.

- Splitting a blind draw into two draws doesn't change your distribution.

- These answers are all overkill; the answers are obvious by intuition, which is good enough (perhaps even better) for an interview. 1. Obviously deck 1 is better, because taking away your first card has a smaller impact on the ratio of same-colour cards left. 2. Obviously they're the same: deck 1 is equivalent to shuffling a deck and taking the top 2 cards, and deck 3 is equivalent to shuffling a deck, taking the top 26 cards, and then taking the top 2 cards of those.
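The hypergeometric argument given in one of the answers can be written out in a few lines with exact fractions; this is a sketch of mine (the function name is illustrative), and it confirms the 25/51 equality without any floating-point rounding:

```python
from fractions import Fraction
from math import comb

# Average the same-color win probability over the composition of a
# random 26-card half drawn from the 52-card deck.
def p_win_random_half():
    total = Fraction(0)
    for n in range(27):   # n = number of red cards in the 26-card half
        weight = Fraction(comb(26, n) * comb(26, 26 - n), comb(52, 26))
        win = Fraction(comb(n, 2) + comb(26 - n, 2), comb(26, 2))
        total += weight * win
    return total

print(p_win_random_half())   # 25/51, exactly the full 52-card deck's odds
print(Fraction(25, 51))
```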

### Quantitative Research at Susquehanna International Group (SIG) was asked...

Mar 17, 2014
You draw a pair (x, y) from a joint Gaussian distribution with zero covariance. Knowing the standard deviations of x and y, and knowing z = x + y, what is your best guess for x?

10 Answers

- z * sigma_x^2 / (sigma_x^2 + sigma_y^2). This is because X and Z are jointly normal and their covariance equals the variance of X. Therefore the correlation coefficient is sigma_x/sigma_z, and E(X|Z) = rho * (sigma_x/sigma_z) * Z; substituting the fact that the variance of the sum is the sum of the variances for independent (normal) r.v.s gives the answer.

- z/2. Think about the 2-dimensional graph of the joint density of (x, y). The condition x + y = z (here z is fixed) is a vertical plane, and the intersection is proportional to the conditional density. For any such intersection curve, the highest point has x-coordinate z/2.

- The answer is 0.5*EX - 0.5*EY + 0.5*z (when EX and EY are not equal to zero).

- [z * sigma_y^(-2)] / [sigma_x^(-2) + sigma_y^(-2)]. From a signal-processing point of view, x is the signal, y is the noise, and z is the observation. X has prior distribution X ~ N(0, sigma_x^2), the noise Y has distribution Y ~ N(0, sigma_y^2), and we observe Z = z; the question asks for the MMSE estimate of X given Z, i.e. E(X|Z). Using Bayes' theorem, or the Gauss-Markov theorem, one can show that E(X|Z) = [z * sigma_y^(-2) + 0 * sigma_x^(-2)] / [sigma_x^(-2) + sigma_y^(-2)]. Comments: 1. This kind of problem is very common, so keep in mind that in the Gaussian case the best estimate of X is a weighted linear combination of the maximum-likelihood estimate (z in this problem) and the prior mean (0 in this problem), with weights given by the inverse variances. 2. In multidimensional cases where x, y, z are vectors, similar rules apply; check the Gauss-Markov theorem. 3. The intuition is that the larger the variance of the noise y, the less trust we assign to the ML estimate (weight sigma_y^(-2)), and correspondingly the more trust we put on the prior for X.

- z * sigma_x^2 / (sigma_x^2 + sigma_y^2), similar to CAPM.

- If you don't know the above theorems, you can use good old Bayes: P(X|Z) = P(Z|X)P(X)/P(Z), then set the derivative to 0, since you have the pdfs of X and Z. But it's really messy and I don't want to do it.
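The z * sigma_x^2 / (sigma_x^2 + sigma_y^2) answer (equivalently the inverse-variance weighting) can be checked by simulation, since E[X|Z] for jointly Gaussian variables is the regression of X on Z. A sketch; the particular values of sx, sy, z and the seed are illustrative choices of mine, not part of the question:

```python
import numpy as np

# Monte Carlo sketch of E[X | X + Y = z] = z * sx^2 / (sx^2 + sy^2)
# for independent X ~ N(0, sx^2), Y ~ N(0, sy^2).
rng = np.random.default_rng(1)
sx, sy, z = 2.0, 1.0, 1.5
x = rng.normal(0.0, sx, size=1_000_000)
zv = x + rng.normal(0.0, sy, size=1_000_000)

# The regression slope cov(X, Z)/var(Z) estimates sx^2/(sx^2 + sy^2).
beta = float(np.cov(x, zv)[0, 1] / np.var(zv))
print(round(sx**2 / (sx**2 + sy**2), 3))   # theoretical slope: 0.8
print(round(beta, 3))                      # estimated slope, close to 0.8
print(round(beta * z, 3))                  # best guess for x when z = 1.5
```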

### Quantitative Researcher at Jane Street was asked...

Oct 24, 2014
Interesting question: from a deck of 52 cards you pick 26 at random, then pick two cards from this set of 26; you win if both cards are the same color. Would you prefer this game over one in which you win by picking two cards (your first two picks) of the same color at random from a deck of 26 with equal numbers of black and red cards?

8 Answers

- Random is better.

- Random is not better; both give equal win rates according to simulation.

- No difference: you can think of the first 26 cards of the shuffled deck as the randomly selected 26 cards, and then you pick the first two. So the winning probability will be exactly the same.

- The first option is slightly better. One way to argue: if the 26 random cards split evenly, it's the same as the second situation, but if the split is uneven (say 12R 14B), the probability is 12/26 * 11/25 + 14/26 * 13/25 > the probability of the second option, and it only improves as the draw gets more skewed. Another way: the first option is picking from a deck of 52 evenly split cards and the second from 26, so the first option's probability is 25/51 and the second's is 12/25. As you choose from more and more cards, the probability increases toward 1/2.

- Intuitive solution: 26 is an arbitrary selection. Consider the two-card case. Case 2: a deck of two cards, 1 black and 1 red, gives P = 0. Case 1: pick two cards from 52, giving P > 0.

- Random is better. Here is the solution. For the latter game (fixed 26 cards with equal red and black), the probability of winning is p2 = 1 - 13*13/C(26,2) = 0.48 (one minus the probability of picking two different-color cards). For the random game, although the expected numbers of red and black cards are equal, the actual counts may differ. Let R be the number of red cards and B the number of black cards, with R + B = 26. The probability of winning is then p1 = 1 - R*B/C(26,2). Given R + B = 26, we have R*B <= 13*13, so p1 >= p2 always.

- Fixed is better, since in the random case more colors are mixed in, which lowers the probability of hitting a same-color pair. Keep in mind a deck of cards is composed of 4 suits of 13 cards each, so 12+14 is not happening.

- Oops, I got this wrong: random is better, and 12+14 can happen. We are talking about colors...
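A direct Monte Carlo run separates the two games cleanly (the fixed deck wins with probability 12/25 = 0.48, the random half with 25/51 ~ 0.490). This is a sketch of mine; the trial count and seed are arbitrary:

```python
import random

# Compare drawing 2 cards from a fixed 13R/13B deck against first
# drawing a random 26-card half of a full 52-card deck.
random.seed(7)
full = ["R"] * 26 + ["B"] * 26
fixed = ["R"] * 13 + ["B"] * 13
trials = 200_000

def same_color(deck):
    a, b = random.sample(deck, 2)
    return a == b

p_fixed = sum(same_color(fixed) for _ in range(trials)) / trials
p_random = sum(same_color(random.sample(full, 26)) for _ in range(trials)) / trials
print(round(p_fixed, 3))    # near 12/25 = 0.48
print(round(p_random, 3))   # near 25/51 ~ 0.490: the random half is better
```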

### Quantitative Researcher at Jane Street was asked...

Dec 21, 2011
A tosses n+1 coins. B tosses n coins. B wins if he has at least as many heads as A. What is the probability that B wins?

7 Answers

- See the other guy's solution.

- 1/2 or 0.5.

- Question rephrased: what's the probability that A has more heads? On the first n throws they have an equal number on average, so A's only chance to get strictly more heads comes from the last throw, a 50% chance. -> P(A wins) = 50% -> P(B wins) = 1 - P(A wins) = 50%.

- ^ is a great way of thinking about it.

- Use symmetry. If both A and B had n coins, we could write prob(A > B) = prob(A < B) = p and prob(A = B) = t, with 2p + t = 1. A's extra toss wins the game for A only if the first n coins are tied and the extra coin lands heads, so P(A wins) = p + t/2 = 1/2, and hence P(B wins) = 1/2.

- I don't think 50% can be right; that's already the chance when they have an exactly equal number of coins.

- Let A, B be two binomial r.v.'s with means (n+1)/2 and n/2 and variances (n+1)/4 and n/4 respectively. Let Z = B - A; then E[Z] = -1/2 and Var[Z] is about n/2, and Chebyshev's inequality gives a bound on Pr[B wins] = Pr[Z >= 0].
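The 1/2 answer can be confirmed exactly by summing the binomial pmfs. A small sketch (the function name is mine), using exact fractions:

```python
from fractions import Fraction
from math import comb

# Exact check that P(B >= A) = 1/2 when A tosses n+1 fair coins
# and B tosses n fair coins.
def p_b_wins(n):
    total = Fraction(0)
    for a in range(n + 2):            # number of heads for A
        for b in range(a, n + 1):     # heads for B, requiring b >= a
            total += (Fraction(comb(n + 1, a), 2 ** (n + 1))
                      * Fraction(comb(n, b), 2 ** n))
    return total

print([p_b_wins(n) for n in (1, 2, 5, 10)])   # each equals Fraction(1, 2)
```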

### Quantitative Researcher Summer Intern at Jane Street was asked...

Apr 17, 2011
2) A. 10 ropes, each with one red end and one blue end. Each time, take a red end and a blue end and tie them together; repeat 10 times. What is the expected number of loops? B. 10 ropes with uncolored ends; everything else stays the same.

7 Answers

- 1/10 + 1/9 + ... + 1? B is similar.

- 1/19 + 1/17 + etc. in B.

- E[n] = 1/n + (n-1)/n*E[n-1] = 1/n + E[n-1]. For the case of n = 10, you would sum up all of the numbers: 1/10 + 1/9 + 1/8 + 1/7 + ... + 1/2.

- Add an extra 1 to the previous answer.

- For part A, the answer is 1 + 1/2 + 1/3 + ... + 1/10. For part B, the answer is 1 + 1/3 + 1/5 + ... + 1/19. Explanations: for part A, ctofmtcristo has the right approach but a typo in the equation for E[n]. To obtain the expected number of loops, note that the first red end has a 1/n chance of connecting to its own rope's blue end (forming a loop) and an (n-1)/n chance of connecting to a different rope's blue end (not yet forming a loop), so E[n] = 1/n*(1 + E[n-1]) + (n-1)/n*E[n-1] = 1/n + E[n-1], with base case E[1] = 1. Then, by induction, E[n] = 1 + 1/2 + 1/3 + ... + 1/n. Part B is similar. The first end now has 2n-1 possible ends to connect to, of which 1 is its own rope's other end and 2n-2 belong to a different rope. Then E[n] = 1/(2n-1)*(1 + E[n-1]) + (2n-2)/(2n-1)*E[n-1] = 1/(2n-1) + E[n-1], with base case E[1] = 1. By induction, E[n] = 1 + 1/3 + 1/5 + ... + 1/(2n-1).

- Ed's answer is not right. Just check the case of 3 pairs. The total number of cases is 3! = 6: 1 case with 3 loops, 2 cases with everything wrongly attached, and 3 cases with 1 loop. So the expected value is (3/6)*(1) + (1/6)*(3) = 6/6 = 1, while Ed's answer gives 1 + 1/2 + 1/3 = 11/6, which is clearly wrong.

- Timi, you are missing the fact that if they are "all wrongly attached" then they form a loop. Similarly, the case you are thinking of "with 1 loop" actually has 2 loops. The correct answer is still 11/6.
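The harmonic-sum answers can be evaluated directly, and part B (the disputed one) can be cross-checked by simulating random ties. A sketch of mine; the end-numbering scheme and trial count are illustrative choices:

```python
from fractions import Fraction
import random

# Closed forms from the thread: part A is 1 + 1/2 + ... + 1/10,
# part B is 1 + 1/3 + ... + 1/19.
harmonic_a = sum(Fraction(1, k) for k in range(1, 11))
harmonic_b = sum(Fraction(1, 2 * k - 1) for k in range(1, 11))
print(float(harmonic_a))   # ~2.929 expected loops with coloured ends
print(float(harmonic_b))   # ~2.133 expected loops with plain ends

def loops_after_random_ties(n_ropes=10):
    # Ends 2i and 2i+1 belong to rope i; tie random pairs of free ends.
    ends = list(range(2 * n_ropes))
    random.shuffle(ends)
    tied = {}
    for i in range(0, len(ends), 2):
        tied[ends[i]] = ends[i + 1]
        tied[ends[i + 1]] = ends[i]
    seen, loops = set(), 0
    for start in range(2 * n_ropes):
        if start in seen:
            continue
        loops += 1
        e = start
        while e not in seen:   # walk rope <-> tie edges around the loop
            seen.add(e)
            mate = e ^ 1       # the other end of the same rope
            seen.add(mate)
            e = tied[mate]     # follow the tie to the next rope
    return loops

random.seed(3)
trials = 100_000
est = sum(loops_after_random_ties() for _ in range(trials)) / trials
print(round(est, 2))   # close to 2.13, matching part B's closed form
```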

### Quantitative Researcher at Jane Street was asked...

Nov 21, 2016
Flip a fair coin repeatedly; what is the expected number of flips to get HHT?

6 Answers

- Let x_i be the probability that a random infinite binary sequence has its first HHT ending at position i. Clearly x_i = 0 for i = 1, 2 and x_i = 1/8 for i = 3, 4, 5, and we can deduce the recurrence x_i = 1/8*(1 - x_{i-3} - ... - x_1) for i >= 3. Since the sum of all x_i is 1, we can write x_i = 1/8*(x_{i-2} + x_{i-1} + x_i + ...), so x_3 + x_4 + x_5 + ... = 1/8*(x_1 + 2x_2 + 3x_3 + ...). Multiplying both sides by 8, the expected number of flips is 8.

- This is incorrect. You cannot bracket like this. Condition on the first 2 steps and you will see that the expected number of steps to get HHT is 14.

- Both are wrong; the answer is 10. Let S_i be the outcome of the i-th flip and let X be the number of flips required to get HHT. If S_1 = T, we must start over: contribution 0.5*(X+1), since we see T with probability 1/2. If S_1 = H, move on to the second flip. If S_2 = H, we are basically in the same state and just wait for T to come: contribution 0.5*X. If S_2 = T, proceed to the next flip. If S_3 = H, we have reached the final spot, having taken 3 steps to get here; if S_3 = T, the whole sequence breaks down and we start from the beginning: contribution 0.5*(X+3). So the entire equation is X = 0.5*(X+1) + 0.5*{0.5*X + 0.5*[0.5*3 + 0.5*(X+3)]}; expanding gives X = 10.

- Here is another approach using conditional expectation. Let X be the minimum number of flips needed to get the first run of HHT. Let HH denote the event that the first two flips land heads, and similarly define TH, HT, TT. By the tower property, E[X] = E[X|HT]P(HT) + E[X|TT]P(TT) + E[X|HH]P(HH) + E[X|TH]P(TH), where P(HH) = P(HT) = P(TH) = P(TT) = 0.25. It is easy to see that E[X|HT] = E[X|TT] = E[X] + 2, and with a little observation, E[X|HH] = 4 and E[X|TH] = 0.5*E[X] + 4. The equation then implies E[X] = 8.

- 8 is the correct answer. This can also be solved by viewing it as a renewal reward process. Suppose we earn a reward of 1 each time we see HHT; then intuitively, in the long run we earn 1/8 (= p(H)p(H)p(T)) unit of reward per flip. By the renewal reward theorem, this equals the expected reward per cycle over the expected cycle length. The expected reward per cycle is just 1, so the expected hitting time is 8. This generalizes to HTH, which is more interesting because the reward per cycle is no longer 1: if the previous cycle ended in HTH and the next flips start TH, we actually see HTHTH, so the reward should be 2. The expected reward per cycle is then 1 + p(T)p(H) = 5/4, and the expected hitting time is 10.

- HHT = HH + T. After 2 consecutive heads (with E[HH] = 6), if we flip H we still have 2 consecutive heads, and otherwise we get the desired HHT. So E[T] = 2, and therefore E[HHT] = E[HH + T] = E[HH] + E[T] = 6 + 2 = 8.
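Several answers above derive 8; a quick numerical cross-check can settle the 8 vs. 10 vs. 14 dispute. This sketch is mine (state labels, trial count, and seed are arbitrary choices):

```python
import random

# First check: iterate the three-state equations to a fixed point.
# States track the useful suffix of the flip history: "", "H", "HH".
e = {"": 0.0, "H": 0.0, "HH": 0.0}
for _ in range(200):
    e = {
        "HH": 1 + 0.5 * e["HH"],                # H keeps HH alive, T finishes
        "H":  1 + 0.5 * e["HH"] + 0.5 * e[""],  # T sends us back to the start
        "":   1 + 0.5 * e["H"] + 0.5 * e[""],
    }
print(round(e[""], 6))   # 8.0

# Second check: direct simulation.
random.seed(5)

def flips_until_hht():
    suffix, n = "", 0
    while suffix != "HHT":
        n += 1
        suffix = (suffix + random.choice("HT"))[-3:]   # last three flips
    return n

trials = 100_000
avg = sum(flips_until_hht() for _ in range(trials)) / trials
print(round(avg, 1))   # close to 8
```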