I was interviewing for a role. The process lasted 7 months in total: 12 interviews across 2 teams, and then they closed the roles and didn't hire anyone. Not really impressed by Apple.
I have a similar story, but it makes sense.
Because of the image and brand value they project, they attract a lot of people who want to work for them for that reason alone. So they have plenty of options and can afford to waste people's time without much downside, since they have the bankroll to finance all that inefficiency.
But it's really not fair to the people applying, that's for sure.
In any case, I don't think it's worth applying for a job at Apple unless you're already a well-known (semi-)authority in your field, so that you have at least some leverage to dictate the terms.
Apple treats their suppliers very badly; there's no reason they would treat people they don't really need any better.
If Apple were personified, it would be the narcissistic mean girl who is extremely popular because of her beauty.
All I need to run the algorithm is the proportion of qualifying numbers in the input array and the number of samples. Then we can sample the min and max index of the qualifying entries and return their difference, without having to sample many times, provided we can derive the joint min/max distribution conditional on the Bernoulli draws.
In other words, the procedure can take any input array and qualifying criterion.
The joint distribution is relatively simple to derive. (This is related to the fact that the min and max of continuous uniform samples on [0, 1] are Beta distributed.)
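A minimal sketch of that idea (my own illustration, not anyone's actual code; `n` and `p` stand in for the array length and the qualifying proportion, and the continuous Beta approximation is used for the discrete indices):

```python
import numpy as np

rng = np.random.default_rng()

def sample_index_spread(n, p):
    """One O(1) draw of (max qualifying index - min qualifying index)
    for a hypothetical length-n array whose entries each qualify
    independently with probability p."""
    k = rng.binomial(n, p)        # how many entries qualify: one Binomial draw
    if k == 0:
        return None               # nothing qualified
    if k == 1:
        return 0                  # min and max coincide
    # Joint (min, max) of k iid Uniform(0, 1) points:
    # max ~ Beta(k, 1), sampled by inverse CDF as U^(1/k);
    # given max = b, the other k-1 points are iid Uniform(0, b),
    # so min | max = b  is  b * (1 - V^(1/(k-1))).
    mx = rng.uniform() ** (1.0 / k)
    mn = mx * (1.0 - rng.uniform() ** (1.0 / (k - 1)))
    return round(n * (mx - mn))   # scale back to (approximate) index units
```

One call replaces generating the whole array and scanning it, at the cost of only ever being distributionally correct, never correct for a concrete array.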
Sampling doesn't give you the actual answer for an actual array. If the program uses the array for multiple things, such as organizing the numbers after allocating the correct number of buckets, your method will cause logic errors and crashes.
The O(1) method based on statistics only works when the function making this calculation can hide the array (or lack of array) behind a curtain the entire time. If it has to take an array as input, or share its array as output, the facade crumbles.
The prompt is not "generate this many random numbers and then say max qualifying minus min qualifying". If it were, your method would give valid solutions. But the prompt starts with "Given a list".
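Concretely, a solution that honors the "Given a list" contract has to have roughly this shape (a sketch; `qualifies` stands in for whatever criterion the prompt specifies):

```python
def max_minus_min_qualifying(xs, qualifies):
    # The list is a real input: this must work for whatever array the
    # caller supplies, not just one the function invented itself.
    q = [x for x in xs if qualifies(x)]
    return max(q) - min(q) if q else None
```

Any O(1) shortcut that never looks at `xs` cannot implement this signature honestly.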
In the article, we let ChatGPT generate the random numbers as a matter of convenience. But the timing results are only valid as long as it keeps that part intact and isolated. We have to be able to swap it out for any other source of random numbers. If it invents a method that can't do that, it has failed.
It still depends on how you read the problem. In a lot of the LLMs' solutions the array is not passed to the solving function but constructed inside it (instead of defining the function with an input and then writing a main function that builds an array and calls the solver with it as the argument, as is typical in Python). So either the LLM did not read the problem that way, or it also failed this aspect of the code, which was never really mentioned. It is not clear whether we are given a specific array of integers, or whether the input is an array of random variables that we need to instantiate ourselves.
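For illustration, many of those generated solutions had roughly this shape (a hypothetical sketch; the sizes and the placeholder criterion are made up):

```python
import random

def qualifies(x):
    return x % 7 == 0  # placeholder criterion, not the prompt's actual one

def solve():
    # The array is built inside the solver, so the function can never be
    # run against a list the caller actually supplies.
    xs = [random.randint(1, 100_000) for _ in range(10_000)]
    q = [x for x in xs if qualifies(x)]
    return max(q) - min(q) if q else None
```

Under that reading, swapping in a different source of numbers means editing the solver itself rather than calling it with a different argument.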
The next step would be to propose hardcoding 99930-3999 as the O(1) result and live with the output just being wrong sometimes. The bug rate would then be in the ballpark of most modern software, including LLMs', so I'd say ship it.
> There is no record or credible report indicating that Jeff Baena has passed away. As of the most recent information available, he is still alive. My training data includes information up to October 2023. Events or details that emerged after that date may not be reflected in my responses.
I agree with your comment. I almost didn't click on it because I wouldn't be interested in a C# or .NET project, and I was very pleasantly surprised. I don't love the name.