Intern Interview Questions | Glassdoor

# Intern Interview Questions

From retail to finance to medicine, every industry needs interns to provide additional support and assistance. Interview questions will vary greatly depending on the industry and role you are looking for. Expect to answer questions about how you work on teams and provide examples of any relevant work experience. To ace your interview, make sure to research the particular position you are applying for.

## Top Interview Questions


### Prop Trading Summer Intern at Jane Street was asked...

Mar 11, 2011
You are playing a game in which the player draws a number from 1-100 out of a hat, and may replace it and redraw as many times as they want; their final number is how many dollars they win from the game. Each "redraw" costs an extra \$1. How much would you charge someone to play this game?

10 Answers

- Redraw 10 times and get a payoff around 77?
- The average draw will pay out \$50.50.
- Every time you are deciding whether to draw once more, consider the two options: stop now and keep your current number (the \$1 already paid is a sunk cost), or continue, in which case the expected benefit is \$50.50 - 1 = \$49.50. So as long as you have drawn \$50 or more, you should stop. Suppose the game ends after N rounds, with probability (49%)^(N-1) × 51%; in the last round the expected number is (50+100)/2 = 75, so the expected net benefit is 75 - N, which means N ≤ 74. Taking the sum, Σ_{N=1}^{74} (49%)^(N-1) × 51% × (75 - N) ≈ 73.
- All of the above answers are way off. For a correct answer, see Yuval Filmus' answer at StackExchange: http://math.stackexchange.com/questions/27524/fair-value-of-a-hat-drawing-game
- Yuval Filmus proves that the value of the game is 1209/14 ≈ 86.36, and that the strategy is to stay on 87 and throw again on 86 and below.
- Let x be the expected value and n the smallest number you will stop on. Set up an equation relating x and n, express x in terms of n, take the derivative to find the n that maximizes x, plug in the ceiling (n must be an integer), and evaluate. The ceiling ends up being 87 and x is 87.357, so charge \$87.36 or more.
- I take the question to ask for the expected value of the game under an optimal strategy. Suppose the strategy is to redraw on 50 or less. The expected value of each round is then: (1) 1/2 × 1/50 × (51 + 52 + ... + 100); (2) 1/2 × 1/2 × 1/50 × (51 + ... + 100) - 1/2; (3) 1/2³ × 1/50 × (51 + ... + 100) - 1/4; and so on. Summing to infinity gives 74.50.
- This is all very interesting, and I'm sure it has some application... but to trading? I don't think so. I own a seat on the futures exchange and was one of the largest derivatives traders on the floor. Math skills and reasoning are important, but not to this level. I would compare day trading/scalping to race car driving: getting a feel for what's going on during the day, the speed at which the market is moving, and the tempo of the up and down moves. If I were the interviewer at one of these firms, I'd throw a baseball at your head and see if you were quick and smart enough to duck. Then, if you picked it up and threw it back at my head, I'd know you had the nerve to trade. I know guys who can answer these questions, work at major banks, and have teams of quants working for them, yet call me up to borrow money because they're not making any. At the end of the day, if you want to be a trader, be a trader; if you want to be a mathematician, be a mathematician. It's cool to multiply a string of numbers in your head (I can do this too), but never in my trading career did I make money because I could instantly multiply 87 × 34 or answer Mensa questions. Realistically, the answer is: it depends on the market, because the market will dictate the price. You may want to charge \$87 to play that game, but you'd have to be an idiot to play it. In trading terms: when AAPL was trading at \$700, everyone would have loved to buy it at \$400; now that it's trading at \$400, everyone is afraid it's going to \$0. Hope this helps. No offense to the math guys on this page; I just want to set the trading record straight.
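The threshold strategy debated above is easy to check numerically. The sketch below scans every stopping threshold t (stop on any draw of t or higher), under the assumption that the first draw is free and each redraw costs \$1:

```python
def game_value(t):
    """Expected net winnings when stopping on any draw >= t."""
    p_stop = (101 - t) / 100        # chance a single draw ends the game
    expected_final = (t + 100) / 2  # average of t..100
    expected_draws = 1 / p_stop     # geometric distribution
    return expected_final - (expected_draws - 1)  # redraws cost $1 each

best = max(range(1, 101), key=game_value)
print(best, round(game_value(best), 2))  # -> 87 87.36
```

If instead the first draw also costs \$1, every threshold's value drops by exactly \$1, which recovers the 1209/14 ≈ 86.36 figure from the StackExchange answer; either way, the optimal rule is to stay on 87 or higher.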

### Sales Strat Intern at Goldman Sachs was asked...

Mar 17, 2013
Suppose we hire you, and you and the rest of the new interns decide to go buy a cup of coffee. Each intern purchases one cup of coffee. One of the interns suggests everyone play a game. Everyone will flip a fair coin, dividing the group of interns into two subgroups: those that got heads and those that got tails. The game is this: whichever group is smaller evenly splits the cost of everyone's cup of coffee (i.e., if there are 5 interns and 3 get H and 2 get T, then the two interns that got tails each buy 2.5 cups of coffee). However, nothing says you need to play this game. You can choose to buy your own cup of coffee and not play at all. The question: Should you play this game? (Note: you may assume there is an odd number of interns, so there are no ties, and that if everyone gets H or everyone gets T, then everyone loses and just buys their own cup of coffee.)

18 Answers

- Hint: despite its look, this is not a math question.
- I don't get it. Would you please provide more hints?
- Assume each coffee costs \$1, for simplicity. This is effectively a choice between two outcomes: paying \$1 with probability 100%, or paying \$0 with some probability and paying more than \$0 with some probability. So ask yourself: what is your expected cost in the second case? Give that a try and see if you can figure it out. Remember, though, that the question is "should you play this game?" This isn't just a math question; if you only work out expected values, you've missed the point. For example, a separate question with the same flavor is this: suppose I give you a choice of two outcomes. Either you get \$1 with 100% probability, or I give you \$500,000,000 with probability 1/100,000,000 and \$0 otherwise. Which would you pick? Now what if the second choice were \$50 with probability 1/10 and \$0 otherwise? Which would you pick then? These are the kinds of things you want to think about while answering this kind of question. Let me know if you have any more questions, and if you want me to post the answer, just ask.
- I think I get it. It's about the investor's risk appetite. Investors are likely to take guaranteed gains, here the \$1.
- Well, yes and no. This is indeed a risk-aversion question. If you work it out, you'll find that the EV in each case is exactly the same (your EV is -1 cup of coffee in both scenarios), but that's not the end of it. It's also a question to test your risk aversion. You can support either answer, and *should* comment on the validity of both. My answer was to buy my own cup of coffee, and I followed it up with a story about a friend who once tried to get us to play credit card roulette (similar in spirit to this game): I did, in fact, say no in that instance, and I explained why. However, people are traditionally risk-averse when faced with gains and risk-seeking when faced with losses, so many would probably choose to play. This is as much a question about your psychology as about your math skills. That's why it's such an awesome question, and probably one that kills most people it's asked of.
- Would you please explain how you get -1 for the second scenario? There is a 50% chance of H and 50% of T, so a player has a 50% chance of being in the winning group. Given 5 interns, there are two kinds of losing groups (4 vs. 1 and 3 vs. 2): EV = 0.5 × 0 + 0.5 × (0.5 × (-5) + 0.5 × (-2.5)) = -1.875.
- The probability of winning isn't 50% (it's actually slightly above 50%), but that's not the way to look at it. The total number of coffees that must be bought is n, where n is the number of interns. Going into the game, every intern is in the same position, so each has the same expected value, and the sum of the expected values must equal -n. So everyone has an EV of -1, as claimed.
- Thank you very much. I finally got it.
- I'm still not sure why you said no if the EV is the same?
- It's a matter of risk preference. See the example I gave in my Mar 19 posting: "Suppose I give you a choice of two outcomes. Either you get \$1 with 100% probability, or I give you \$500,000,000 with probability 1/100,000,000 and \$0 otherwise. Which would you pick? Now what if the second choice were \$50 with probability 1/10 and \$0 otherwise?" You would probably not play in the first instance and might consider playing in the second (and those risky options even have a HIGHER EV). This coffee game is the same kind of game: even though the EV is the same either way, the volatility is not. It is usually a good rule of thumb to take the lower-volatility outcome when the EV is the same (think Sharpe ratio, or the efficient frontier; both get the point across). Does that help?
- I say yes, play the game, but only because I'm prepared to lose. The fact that I can afford to lose makes me want to try my chance at winning.
- I would play the game. Consider the expected price with n total interns splitting a total cost P: that is P/2 × E[1/N | you're paying], which equals P/2 × E[1/(N+1)], where N is now a binomial over the other n-1 interns. Notice that 1/(n+1) is a concave function. That means P/2 × E[1/(N+1)] ≤ P/2 × 1/(E[N]+1) = P/2 × 1/((n-1)/2 + 1) = P/(n+1). So it pays to play on average.
- Apologies, but both of the above answers are incorrect. The EV of playing is the same as the EV of not playing.
- No way would I play! The most money I can save is a dollar; the most I can lose (in the case of five interns) is four dollars. That's a 400 percent downside versus a 100 percent upside (sort of — you cannot technically compute a return on zero dollars invested). Plus, I don't even drink coffee!
- Stupid... I don't have time to play games at work.
- The EVs in the two scenarios are not the same: it's clear when you think about the case where there are n = 3 interns.
- I've already explained why they are the same. In either scenario, n cups of coffee get bought (3 in your case), so the total EV is -n (-3 in your case). In the game, each person is in the same position as any other, so their EVs must all be equal; that is why the EV in the game is -1, the same as buying your own cup. To argue otherwise is to argue either that the total EV is not the number of coffees purchased or that someone has an unfair advantage in the game. Incidentally, if you are going to claim that someone who has given you the answer is wrong, you should provide more of a response than "go think about it."
- Let V = (amount paid in the game) - 1. All five interns have the same distribution of V by symmetry, and 5·E[V] = 0 because the game is zero-sum, so E[V] = 0 for everyone.
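A quick simulation supports the claim that each intern's expected cost is one coffee either way. This is a sketch with assumed parameters (5 interns, \$1 coffees, player 0 is the intern we track):

```python
import random

def trial(n=5):
    """Cost to player 0 in one round of the coffee game."""
    flips = [random.random() < 0.5 for _ in range(n)]  # True = heads
    heads = sum(flips)
    if heads == 0 or heads == n:
        return 1.0                                  # everyone buys their own
    minority = min(heads, n - heads)
    in_minority = flips[0] == (heads < n - heads)   # is player 0 in the smaller group?
    return n / minority if in_minority else 0.0     # losers split all n coffees

random.seed(0)
avg = sum(trial() for _ in range(200_000)) / 200_000
print(round(avg, 2))  # close to 1.0, same as opting out
```

The mean cost converges to one coffee, so the choice really does come down to volatility preference, not EV.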

### Summer Trading Intern at Jane Street was asked...

Oct 4, 2011
Say I take a rubber band and randomly cut it into three pieces. What's the probability that one of the pieces has length greater than 1/2 of the original circumference of the rubber band?

9 Answers

- 3/4
- Suppose you have two cuts placed randomly on the rubber band. The probability that one segment is greater than half the circumference is the probability that the third cut falls inside the combined range extending 90° to either side of the existing cuts. Since the average distance between the first two cuts is also 90°, the combined range is 270°, or 3/4 of the circle.
- You need 3 cuts to end up with 3 pieces. The first cut doesn't matter. The second cut can also be anywhere, and the largest piece will still be at least half the circumference. What matters is the third cut, which should lie in the same half as the second cut. So the probability is actually 1/2.
- The correct answer is 3/4, as this problem is equivalent to the famous three-points-on-a-semicircle problem. Why? If one of the pieces has length greater than 1/2 the circumference, then the three cut points must lie in the same semicircle. On the other hand, if the three cut points lie on the same semicircle, then the longest piece must be at least 1/2 of the circumference.
- For reference on the three-points-on-the-same-semicircle problem, see e.g. http://godplaysdice.blogspot.com/2007/10/probabilities-on-circle.html
- 1/4 (i.e., 1 - 3/4)
- Suppose I have two points whose minor-arc distance is t ≤ 1/2. The range of semicircles covering both points gives an arc of length (1/2 + 1/2) - t = 1 - t. Fixing the first point and tracing the second around the circle gives minor-arc lengths running from 0 up to 1/2 and back down to 0. Therefore the answer is 2 × ∫₀^{1/2} (1 - t) dt = 2(1/2 - 1/8) = 3/4.
- It's 3/4. Make one cut to turn the band into a line, and pretend its length is 100. If the first of the two remaining cuts lands at x = 1, then as long as the second cut isn't between x = 50 and 51, one piece will exceed 50% of the length, so there's a 99% chance. If the first cut were infinitely close to the end, the chance would be about 100%. With the first cut at x = 2 you must avoid x = 50-52; at x = 3, avoid 50-53; and so on. Infinitely close to 50, you must avoid essentially x = 50-100, a 50% chance (50 exactly would work, but a continuous variable hits that point with probability zero). By symmetry the pattern runs back up from 50% to 100% on the other side. Since each position of the first cut is equally likely, the answer is the average, 75%.
- This problem is also equivalent to: if you have a line segment from 0 to 1 and make 2 random cuts on it, what is the probability that the three resulting pieces do NOT make a triangle?
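The 3/4 answer is easy to sanity-check by simulation. This sketch assumes a unit circumference and models the three cuts as uniform points on the circle:

```python
import random

def trial():
    """One random cutting: does some piece exceed half the circumference?"""
    a, b, c = sorted(random.random() for _ in range(3))
    pieces = [b - a, c - b, 1 - c + a]  # arc lengths between consecutive cuts
    return max(pieces) > 0.5

random.seed(1)
p = sum(trial() for _ in range(200_000)) / 200_000
print(round(p, 2))  # close to 0.75
```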

### Quantitative Researcher Summer Intern at Jane Street was asked...

Apr 17, 2011
3) Poker: 26 red, 26 black. You take one card at a time, and each time you can choose to guess whether it's red. You have only one chance to guess. If you are right, you get \$1. What's the strategy, and what's the expected earning?

12 Answers

- Expected earning is 25 cents: 1/2 × 1/2 × \$1 — the probability of choosing to guess is 1/2, and the probability of guessing right is 1/2.
- I would start picking cards without making a decision, to reduce the sample size. This is risky, because I could just as easily hurt my chances by drawing reds early as help them by drawing blacks. But I like my chances with 52 cards that at some point I will at least get back to 50% if I start off drawing reds. Ultimately I could keep picking until there is only one red left, but I wouldn't want to end up there, so I'd decide earlier rather than later. My strategy: pick 3 cards without deciding. If I start by drawing more reds, so that the probability of guessing red correctly is below 50%, I'll look to decide once I get back to the 50% mark (the risk being that I never get back to 50%). If I draw more blacks, I'll keep picking without choosing until I reach 51%, hoping to reach a much smaller sample size with reduced variance while the odds are in my favor. The expected return, in my opinion, depends on when you decide to guess: deciding at a 50% chance of being correct gives an expected return of 50 cents (0.5 × \$1 + 0.5 × \$0); at 51%, it's 51 cents. In other words, your expected return is a direct function of the probability of guessing correctly: 50% = 50 cents, 51% = 51 cents, 75% = 75 cents. Thoughts?
- There is symmetry between red and black. Each time you pull a card it is equally likely to be red or black (assuming you haven't looked at the previous cards you pulled). Thus no matter when you guess, your odds are 50% and the expected return is 50 cents.
- Scheme: guess when the first card drawn is black. P(guess) × P(right) × \$1 = 1/2 × 26/51 = 13/51.
- 0.5: just turn over the first card and see if it's red. I think it's more about trading psychology. If you don't know where the price is going, just get out of the market ASAP. Don't expect anything.
- The problem should be: draw cards at random without replacement, with one chance to guess on every draw. Then the strategy is: on the first draw, guess red at random. If you're correct, you get a dollar, and on the next draw you know there are fewer reds than blacks, so guess black. If your first guess was wrong, guess red on the next round. It's all about conditioning on the information from the previous draws.
- This should be similar to the brainteaser about picking the optimal place in a queue ("if you are the first person whose birthday matches anyone in front of you, you win a free ticket"). Here we want to find the n that maximizes P(first n cards are black) × P(n+1st card is red | first n cards are black), and call the n+1st card.
- The problem statement is not very clear. What I understand is: you take one card at a time, and can either guess or look at it. If you guess and it's red, you gain \$1, and whatever the result, the game is over after the guess. The answer is then \$0.50 under any strategy. Suppose x red and y black remain: if you guess, your chance of winning is x/(x+y). If you instead look at the card and play on the next one, your chance is x/(x+y) × (x-1)/(x+y-1) + y/(x+y) × x/(x+y-1) = x/(x+y), which is the same. A rigorous proof is by induction, starting from x, y = 0, 1.
- The answer above is not 100% correct. In the second scenario, if you only look, the total probability of red is indeed the same; however, the fact that you looked at the card means you know whether the probability of red is now (x-1)/(x+y-1) or x/(x+y-1). The argument only holds if you don't look at the card or have no knowledge of what you passed.
- It doesn't matter what strategy you use; the probability is 1/2. It's a consequence of the Optional Stopping Theorem: the fraction of red cards left in the deck is a martingale, and choosing when to stop and guess red is a stopping time. The expected value of a martingale at a stopping time equals its initial value, which is 1/2.
- My strategy was to always pick the colour that has been drawn fewer times so far; naturally that colour is more probable, because more of it remains in the deck. In the model, n = the number of cards already drawn, k = the number of black cards among them, and m = min(k, n-k), i.e., the count of whichever colour has come up less. After n draws we can face n+1 different situations (k = 0, 1, ..., n). To compute the expected value of the whole game, we need the probability of reaching each situation and the probability of winning the next pick. Each situation has probability C(n, m)/2^n, and the probability of winning the next pick is (26-m)/(52-n), because 26-m cards of the chosen colour remain in a deck of 52-n. Combining them: [C(n, m)/2^n] × [(26-m)/(52-n)]. Sum over k from 0 to n, then over n from 0 to 51 (after the 52nd pick there is nothing left to choose). I hope it's not too messy without proper math notation. It's a lot of computation, so I wrote it quickly in Python and got 37.2856419726, a significant improvement over the basic strategy of always choosing the same colour.
- Dynamic programming: let E(R, B) be the expected gain with R red and B black remaining, under the strategy of guessing whichever colour is more common in the remainder. E(0, B) = B for all B, E(R, 0) = R for all R, and E(R, B) = [max(R, B) + R·E(R-1, B) + B·E(R, B-1)]/(R + B). I don't know how to estimate E(26, 26) quickly.
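The recurrence in the last answer (for the guess-every-card variant) is quick to evaluate with memoisation. A sketch, exactly as stated; note it gives roughly 30.04, not the 37.29 above, since the combinatorial calculation there models the draws as independent coin flips rather than draws without replacement:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def E(r, b):
    """Expected correct guesses with r red and b black remaining,
    guessing the majority colour on every draw."""
    if r == 0 or b == 0:
        return r + b  # the remaining cards are all one colour: all free wins
    return (max(r, b) + r * E(r - 1, b) + b * E(r, b - 1)) / (r + b)

print(round(E(26, 26), 2))  # about 30.04
```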

### Trading Intern at Susquehanna International Group was asked...

Jan 8, 2012
I have 10 cards face-down numbered 1 through 10. We play a game in which you choose a card and I give you the corresponding dollar amount. a) What is the fair price of this game? b) Now, after picking a card you can either take the dollar value on the card or \$3.50. Also, cards worth less than 5 are now valued at \$0. What is the maximum price you are now willing to pay for the game?

9 Answers

- Fair value is 5.5. 2nd game: FV = (3.5 × 5 + 6 + 7 + 8 + 9 + 10)/10 = 4.75.
- You decide on taking the dollar value on the card or \$3.50 *before* seeing the number on the card. And you get \$5 if you open that card, not \$0. So for the 2nd game: (1/2) × 3.5 + (1/2) × (1/10 × [5 + 6 + 7 + 8 + 9 + 10]) = 4.
- 3.5 × (0.4) + 4.5 × (0.6) = \$4.10 — 3.5 because that is what 40% will select, and 4.5 because that is the average of cards 5-10 each drawn with probability 1/10. There is not a 50% chance of selecting 3.5.
- @EASY: you are wrong. There is not a 10% chance of receiving \$3.50 for each low card; numbers under 5 are worthless, and that should change your perspective. FJM is correct: "cards worth less than 5 are now valued at \$0" does not include card number 5.
- How does 4.10 make sense if 4.5 is the EV of taking the card? I think you are incorrect; can you explain your reasoning? Using 1/10 to take your average and then multiplying by 0.6 seems like double counting.
- Total amount = 3.5 × 4 + (5 + 6 + ... + 10); expected value = total/10 = 5.9. @FJM, no idea where your 4.5 is coming from.
- OK, agreed with xinzhuo. It should be 7.5 instead of 4.5 in FJM's calculation, which gives 5.9.
- Agreed, I double counted. 3.5 × 0.4 + 7.5 × 0.6, where 3.5 was given and 7.5 = (5 + 6 + 7 + 8 + 9 + 10)/6.
- "You decide on taking the dollar value on the card or \$3.50 *before* seeing the number on the card." 2nd game: obviously the game price should be no less than 3.5, otherwise there is an arbitrage opportunity. And once the price is above 3.5, it makes no sense for a rational player to just take the \$3.50 and exit the game; the player should always bet. The expectation for betting is (5 + 6 + ... + 10)/10 = 4.5.
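Under the reading most of the thread converges on (you see the card first, cards 1-4 pay \$0, and you may take \$3.50 instead of the card's payout), the 5.9 figure is a one-liner:

```python
# Payout of each card in game b): cards below 5 are worth $0.
payouts = [0 if card < 5 else card for card in range(1, 11)]
# Seeing the card, a rational player takes the larger of the payout and $3.50.
value = sum(max(p, 3.5) for p in payouts) / 10
print(value)  # -> 5.9
```

The 4.10 and 4.5 answers correspond to other readings (committing before seeing the card), which is why the thread disagrees.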

### Software Development Engineer Intern at Amazon was asked...

Feb 15, 2012
Find and return the common node of two linked lists merged into a 'Y' shape.

13 Answers

- How did the two linked lists make their poses to merge into a 'Y' shape — one's head attached to the other's waist? Please explain the question a bit more.
- The two linked lists were something like: 1->2->3->4->5 and 3->4->5->6->7->8.
- For a Y shaped like this: 1->2->3->4->7->8->9 and 5->6->7->8->9, where the trunk portion is shared, it is easy. Get the length of the first list (here 7) and of the second (5). The difference is 2, and it has to come from the legs. So walk the difference in the longer list: now node1 points to 3 and node2 points to 5. Then walk through the two lists in lockstep until the next pointers are the same.
- @kvr: what if the lists are 1-2-3-4-7-8-9 and 12-13-14-5-6-8-9?
- Can this be done using hash tables? Or is there anything more efficient?
- I think kvr's answer is the best. @snv: if the two lists are joined only at the very last two nodes, you find out when you compare the second-to-last nodes — you just got unlucky and basically have to check until the very end. In your example, since the difference in length is 0, you can start comparing the two lists node by node from the very beginning.
- A hash table seems the only efficient way: 1. push each element's address from the smaller list into the hash table; 2. walk the second list; 3. for each element, compare its address against the hash table, and if a match is found, return it; 4. if the list is not exhausted, go to step 2; 5. return NULL.
- A hashtable is complete overkill. The point is to realize that the two linked lists have the same tail. That means if you traverse them with the same index from the right, you will eventually find the first common node. It's almost as easy as the variant where the two lists share the same prefix and you must find the first node where they split; there, you walk them with the same index from the left.
- First reverse both lists, then find the point where they diverge.
- For the Y condition the lists could be List 1: 1->2->3->4->5->6 and List 2: 7->8->9->4->5->6, so the reversed lists are 6->5->4->3->2->1 and 6->5->4->9->8->7. Now compare the two reversed lists, moving forward; the position where the next nodes first differ is the point of merging.
- Some of the above will only work for a doubly linked list. Otherwise, traverse node by node simultaneously from each end; when one traversal ends, the position of the cursor in the other traversal is the answer.
- kvr's answer is good, but I think it could be optimized using two stacks. Traverse both lists, pushing each node onto its own stack. When both are fully traversed, the tops of the two stacks will match. Pop one off each at a time until they no longer match, and return the last node popped. It comes down to where the first common node is: if it's near the beginning of the lists, kvr's answer is better; if it's in the bottom half, two stacks are better.
- Let L1 be the list starting with the lower number and L2 the other. Set X = head of L1 and Y = head of L2. While X <= Y, advance X along L1. If X == Y, return X; otherwise, while Y <= X, advance Y along L2, and if X == Y, return X. Repeat until you reach the end of either list.
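kvr's length-difference approach can be sketched like this (a hypothetical minimal `Node` class; the "Y" is modelled by the two lists sharing their tail nodes by reference):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def length(head):
    n = 0
    while head:
        n, head = n + 1, head.next
    return n

def intersection(a, b):
    """First node shared by lists a and b (None if they never merge)."""
    la, lb = length(a), length(b)
    if la < lb:                       # make a the longer list
        a, b, la, lb = b, a, lb, la
    for _ in range(la - lb):          # burn off the extra leg length
        a = a.next
    while a is not b:                 # now walk in lockstep
        a, b = a.next, b.next
    return a

tail = Node(7, Node(8, Node(9)))      # shared trunk 7 -> 8 -> 9
l1 = Node(1, Node(2, Node(3, tail)))  # 1 -> 2 -> 3 -> 7 -> 8 -> 9
l2 = Node(5, tail)                    # 5 -> 7 -> 8 -> 9
print(intersection(l1, l2).val)  # -> 7
```

This is O(n + m) time and O(1) extra space, versus O(n) space for the hash-table and two-stack variants.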

### Software Engineering Intern at Google was asked...

Oct 14, 2011
How would you find the shortest path between two nodes in a social network?

7 Answers

- Do breadth-first search from both ends at the same time. Keep a set of the nodes each search has reached; when the sets have any element in common, there is a path.
- Does that method have any advantage over doing BFS from one of the nodes and stopping when the other node is reached?
- BFS from both sides is massively faster than BFS from one. Suppose each person has k friends and the two nodes are d apart. BFS from one node is O(k^d); BFS from both nodes is O(k^(d/2)) — the exponent is half as big. To put example numbers on it: if each person has 100 friends and the nodes are 10 apart, BFS from one node takes 100^10 = 10^20 operations, whereas BFS from both nodes takes 2 × 100^5 = 200 billion operations. BFS from one node is intractable; BFS from both nodes is slow, but tractable.
- How about using Dijkstra's shortest path algorithm? Isn't that more efficient than a BFS?
- If you only care about the distance between two nodes and every edge length is 1 (both of which are true in this problem), then Dijkstra's shortest path algorithm basically is breadth-first search (and BFS from both sides is faster than a simple BFS). If that doesn't make sense, explain how http://en.wikipedia.org/wiki/Dijkstra's_algorithm#Pseudocode differs from a breadth-first search in this case and I will clarify.
- Aren't you ignoring the time taken to check for a common element in the two sets (which would be O(k^d))?
- Checking set inclusion is constant time (assuming a reasonable hashset), so it is O(1) to know whether a node added to one side's fringe is already in the other side's fringe. Does that make sense?
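The two-sided search described above can be sketched as follows (a toy friendship graph as an adjacency dict is an assumption; each round expands the smaller frontier by one full level and checks it against the other side's visited set):

```python
def shortest_path_len(graph, src, dst):
    """Length of the shortest path between src and dst via BFS from both ends."""
    if src == dst:
        return 0
    dist_a, dist_b = {src: 0}, {dst: 0}  # distances seen from each end
    frontier_a, frontier_b = [src], [dst]
    while frontier_a and frontier_b:
        if len(frontier_a) > len(frontier_b):      # expand the smaller side
            frontier_a, frontier_b = frontier_b, frontier_a
            dist_a, dist_b = dist_b, dist_a
        nxt, best = [], None
        for u in frontier_a:
            for v in graph[u]:
                if v in dist_b:                    # the two searches have met
                    d = dist_a[u] + 1 + dist_b[v]
                    best = d if best is None else min(best, d)
                elif v not in dist_a:
                    dist_a[v] = dist_a[u] + 1
                    nxt.append(v)
        if best is not None:
            return best
        frontier_a = nxt
    return None                                    # not connected

friends = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice", "dave"},
    "dave":  {"bob", "carol", "erin"},
    "erin":  {"dave"},
}
print(shortest_path_len(friends, "alice", "erin"))  # -> 3
```

Taking the minimum over the whole level before returning avoids the classic bidirectional-search off-by-one where the first meeting found is not on a shortest path.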

### Software Engineer Intern at Facebook was asked...

Oct 1, 2013
Given two strings representing integer numbers ("123", "30"), return a string representing the sum of the two numbers ("153").

13 Answers

- I don't understand... it's a very stupid question! return Integer.toString(Integer.parseInt("123") + Integer.parseInt("30"));
- It's not a stupid question. What if the strings have 10,000 characters?
- It's not a stupid question, but it's not hard either. I believe the way to do it is to implement the manual addition process: loop through the digits from right to left, adding them one by one. This is an O(N) operation. I'm not sure there is a better way.
- lol, it is a stupid question, I agree. All you have to do is parse the strings, add them, and format the result back into a string.
- It is basic, yet not stupid. I assume the interviewer asked to implement atoi and itoa (in case the interview was in C/C++).
- The interviewer wanted a loop through the digits from right to left, adding them one by one and keeping track of the carry.
- ```java
  public static String sumStrings(String a, String b) {
      char[] num1 = a.toCharArray();
      char[] num2 = b.toCharArray();
      int i = num1.length - 1;
      int j = num2.length - 1;
      StringBuilder sumString = new StringBuilder();
      int carry = 0;
      while (i >= 0 || j >= 0) {
          int d1 = (i >= 0) ? num1[i--] - '0' : 0;
          int d2 = (j >= 0) ? num2[j--] - '0' : 0;
          int sum = d1 + d2 + carry;
          carry = sum / 10;
          sumString.insert(0, sum % 10);
      }
      if (carry > 0) sumString.insert(0, carry);  // don't drop the final carry
      return sumString.toString();
  }
  ```
- ```java
  public class StringToInt {
      public int stringToInt(String str) {
          int tens = 1;
          int num = 0;
          for (int i = 0; i < str.length(); ++i) {
              num += (str.charAt(str.length() - 1 - i) - '0') * tens;
              tens *= 10;
          }
          return num;
      }
      public int addStrings(String str1, String str2) {
          return stringToInt(str1) + stringToInt(str2);
      }
      public static void main(String[] args) {
          StringToInt s = new StringToInt();
          System.out.println(s.addStrings("145", "23"));
      }
  }
  ```
- @Conner: what if the strings are 1000 characters long? Do your int tens and int num variables support that?
- ```c
  int stringToNumber(char *a) {
      char *end = a;
      int it = 1;
      int acum = 0;
      while (*end != '\0')
          end++;                       /* move pointer past the last char */
      while (end != a) {
          end--;
          acum += (*end - '0') * it;
          it *= 10;
      }
      return acum;
  }

  int sum(char *a, char *b) {
      return stringToNumber(a) + stringToNumber(b);
  }
  ```
- ```java
  import java.util.Arrays;
  import java.util.Scanner;

  public class AddNumericStrings {
      public static void main(String[] args) {
          final Scanner in = new Scanner(System.in);
          while (true) {
              System.out.println("Enter 2 numeric strings : ");
              String x = in.nextLine();
              String y = in.nextLine();
              System.out.println(add(x.toCharArray(), y.toCharArray()));
          }
      }

      // assumes big is at least as long as small
      private static char[] add(char[] big, char[] small) {
          char[] result = new char[big.length + 1];
          Arrays.fill(result, '0');
          int carry = 0;
          for (int i = big.length - 1, j = small.length - 1; i >= 0; i--, j--) {
              int val = (big[i] - '0') + (j >= 0 ? small[j] - '0' : 0) + carry;
              result[i + 1] = (char) ('0' + val % 10);
              carry = val / 10;
          }
          result[0] = (char) ('0' + carry);
          return result;
      }
  }
  ```
- You all know that negative integers exist, right? The question does not specify that the integers are non-negative, so one must assume negative integers are possible. And it would still not be called subtraction — subtraction does not exist; it would just be addition of the additive inverse.
- https://github.com/codelucas/puzzles/blob/master/java_interview/StringNumAdd.java
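The right-to-left digit loop with a carry that several answers describe looks like this in Python (a sketch assuming non-negative inputs, which handles arbitrarily long strings):

```python
def add_strings(a, b):
    """Sum of two non-negative integer strings, digit by digit."""
    i, j = len(a) - 1, len(b) - 1
    carry, digits = 0, []
    while i >= 0 or j >= 0 or carry:
        d1 = ord(a[i]) - ord('0') if i >= 0 else 0
        d2 = ord(b[j]) - ord('0') if j >= 0 else 0
        carry, d = divmod(d1 + d2 + carry, 10)  # carry and current digit
        digits.append(str(d))
        i, j = i - 1, j - 1
    return ''.join(reversed(digits))

print(add_strings("123", "30"))  # -> 153
```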

### Intern at Google was asked...

Dec 30, 2011
Find occurrences of a number in a sorted array (duplicates allowed).

8 Answers

O(logN) required.

Use binary search to find the number (O(log N)). Once that is done, you can search around that index, though that could become O(N). Better answer: once you have found the number, run binary search again twice, on the upper half and the lower half, so the total run time is O(log N).

I think it is not possible in O(log N) because traversing the complete array is needed to access all the numbers. The worst case would be the last number repeating itself five times. For example: {1,2,2,3,3,3,4,4,4,4,5,5,5,5,5}. If you are asked to find the occurrences of the number 5, then you have to traverse the complete array once, which means the complexity should be O(N).

Java code, O(N) time complexity for both sorted and unsorted arrays:

```java
import java.util.HashMap;
import java.util.Map;

public class FindOccurrences {
    public static void main(String[] args) {
        int[] unsortedArray = {4,6,7,8,4,5,6,8,3,2,4,5,7,8,9,3,4,6,7,8};
        System.out.println(findOccurrencesUnsorted(unsortedArray, 8));
        int[] sortedArray = {1,2,2,3,3,3,4,4,4,4,5,5,5,5,5};
        System.out.println(findOccurrencesSorted(sortedArray, 5));
    }

    private static int findOccurrencesUnsorted(int[] array, int number) {
        Map<Integer, Integer> map = new HashMap<>();
        for (int i = 0; i < array.length; i++) {
            map.merge(array[i], 1, Integer::sum);  // count each value
        }
        return map.getOrDefault(number, 0);
    }

    private static int findOccurrencesSorted(int[] array, int number) {
        int count = 0;
        // Sorted input: stop as soon as we pass the target value.
        for (int i = 0; i < array.length && array[i] <= number; i++) {
            if (array[i] == number) count++;
        }
        return count;
    }
}
```

O(log N) is possible for sorted arrays; the key here is to first check the beginning and end of the range to determine the next binary-search steps. I attached my code below, and I think this is O(log N) in the worst case.

```java
// inputs is the sorted array, k is the number we are looking for.
// start and end are index values; main calls with (start = 0) and (end = length - 1).
int GetOccurance(int[] inputs, int k, int start, int end) {
    if (start > end || inputs[start] > k || inputs[end] < k) return 0;
    if (inputs[start] == k && inputs[end] == k) return end - start + 1;
    int mid = (start + end) / 2;
    if (inputs[mid] < k) return GetOccurance(inputs, k, mid + 1, end);
    else if (inputs[mid] > k) return GetOccurance(inputs, k, start, mid - 1);
    else return GetOccurance(inputs, k, start, mid - 1) + 1 + GetOccurance(inputs, k, mid + 1, end);
}
```

With the last algorithm, how does it ever return anything but 0 if main calls with start := 0 and end := length - 1? The only case where you get past the base case is if the array is of length 1... since (start < end) = (0 < length - 1) from the input... and it returns 0...

Also, I think there is a good chance of a stack overflow with this method. I would add an extra variable and solve the problem tail-recursively.

Try this.

```cpp
#include <iostream>
using namespace std;

// Binary search for the first index that could hold val.
int findLeftEdge(int A[], int lo, int hi, int val) {
    if (lo == hi) return lo;
    int mid = (lo + hi) / 2;
    if (A[mid] < val) return findLeftEdge(A, mid + 1, hi, val);
    return findLeftEdge(A, lo, mid, val);
}

// Binary search for the last index that could hold val.
int findRightEdge(int A[], int lo, int hi, int val) {
    if (lo == hi) return lo;
    int mid = (lo + hi + 1) / 2;  // round up so the range always shrinks
    if (A[mid] > val) return findRightEdge(A, lo, mid - 1, val);
    return findRightEdge(A, mid, hi, val);
}

int countOccurrenceSortedArray(int A[], int num, int val) {
    int left = findLeftEdge(A, 0, num - 1, val);
    int right = findRightEdge(A, 0, num - 1, val);
    if (A[left] != val) return 0;  // val is absent
    return right - left + 1;
}

int main() {
    int A[] = {1, 5, 10, 10, 12, 19, 19, 19, 20, 20};
    int ans = countOccurrenceSortedArray(A, sizeof(A) / sizeof(int), 20);
    cout << ans << endl;
    return 0;
}
```
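The "run binary search twice" idea described in the second answer can be expressed as two boundary searches whose difference is the count. This is a hypothetical Java rendering of that approach, not code from the original thread; the names `lowerBound` and `upperBound` follow the usual convention:

```java
public class CountOccurrences {
    // Index of the first element >= key (a.length if none).
    static int lowerBound(int[] a, int key) {
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;  // unsigned shift avoids overflow
            if (a[mid] < key) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }

    // Index of the first element > key (a.length if none).
    static int upperBound(int[] a, int key) {
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (a[mid] <= key) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }

    // Two O(log N) searches; their difference is the number of occurrences.
    static int count(int[] a, int key) {
        return upperBound(a, key) - lowerBound(a, key);
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5};
        System.out.println(count(a, 5));  // 5
        System.out.println(count(a, 6));  // 0
    }
}
```

Unlike the recursive three-way split above, this stays O(log N) in the worst case without any risk of deep recursion, since both searches are simple loops.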

### Software Development Engineer Intern at Microsoft was asked...

Aug 13, 2013
Determine if an array from 1..n has a duplicate in constant time and space.

12 Answers

Correct answer is to place each value in the array at its corresponding index (i.e. if array[x] = 3, put 3 into array[3]). If an index already contains its corresponding value, there's a duplicate.

^^ Sorry, that's linear time *and* at best linear space, you fail.

What are the properties of an array that affect time complexity? Usually we're talking about the size of the array, N, such that linear-time operations, O(N), are those that perform an operation on each of the elements. However, an important thing to consider is that you can evaluate N (the size of the array) itself in constant time. The only way this can be done in constant time is if the input satisfies the precondition that "1..n" means there are no *missing* values in that range. In other words, such an array from "1..5" must contain at least one instance of each of the numbers 1, 2, 3, 4, and 5. With that precondition, you know that the length of the array will be 5 if there are no duplicates and greater than 5 if it does contain duplicates.

SUM(array) - (N(N+1)/2) = missing number.

@Facebook Intern: That won't help. In case there are 2 duplicates this can be broken: {1, 1, 4, 4}.

I attach pseudocode here:

```
FindDuplicate(A)
    count = 0
    for j = 1 to A.length
        if A[j] != j        // a displaced value indicates a repeat
            count = count + 1
    return count
```

count holds the number of duplicated items.

This cannot be done in linear time unless the data structure used to hold the integers has a property that immediately flags duplicates upon insertion, e.g. a Dictionary/HashMap.

I'm pretty sure the OP posted something wrong there, and they were probably asking to do it in linear time and not constant. If so, the way I would do it would be to use a HashSet: check whether each value from the array is already in the set and, if not, add it. If at any time I find an element that's already inside, there is a duplicate.

If an array has elements from 1 to n, the size of the array will be n; thus if the size of the array is greater than n, there must exist a duplicate.

I think they are just asking to determine whether there is any duplicate number, not to find out which number it is. So we can find out by using the sum of the first n numbers.

They asked for constant time, so checking the sum will not work. For zero-indexed arrays, check if arr[len(arr)-1] == len(arr) - 1.
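Under the precondition spelled out in the third answer (every value in 1..n appears at least once), the constant-time test reduces to a single length comparison. A minimal sketch under that assumption; the class name and signature are illustrative only:

```java
public class DuplicateCheck {
    // Assumes the array contains every value 1..n at least once.
    // Then a duplicate exists iff there are more than n elements.
    static boolean hasDuplicate(int[] a, int n) {
        return a.length > n;  // O(1): one length lookup and one comparison
    }

    public static void main(String[] args) {
        System.out.println(hasDuplicate(new int[]{3, 1, 2, 4, 5}, 5));     // false
        System.out.println(hasDuplicate(new int[]{3, 1, 2, 4, 5, 2}, 5));  // true
    }
}
```

Without that precondition, no constant-time solution exists, since any algorithm that does not inspect every element can miss a duplicate; that is the point several answers above are circling.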