It’s been over a month since my last post. I’ve started going to the Hands-On Machine Learning book club hosted by the San Diego Machine Learning group. The book of the semester is Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow.
I’m leading the discussion on Chapter 9, which we broke up into two segments. Part I, on clustering, was presented on Saturday.
There’s a pet peeve of mine that this book keeps feeding through its heavy use of big O notation. When I first took a close look at big O notation, I recall being incredibly angry and frustrated. “What the hell is that?”
Big O notation, for those who don’t know, is a convention that’s widely used and accepted across the industry to describe how an algorithm’s running time grows as its input gets larger. Roughly speaking, the fewer passes an algorithm makes over the data, the faster it will run; presumably.
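To make that concrete, here’s a minimal sketch in Python. The duplicate-finding task and function names are my own illustration, not something from the book: checking a list for duplicates with a nested loop is O(n²), while a single pass with a set is O(n).

def has_duplicates_quadratic(items):
    # O(n^2): compares every pair of items, so doubling the
    # input roughly quadruples the work.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # O(n): one pass over the data, using a set for
    # constant-time membership checks.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

Both functions give the same answer; the difference only shows up as the input grows, which is exactly what big O is trying to capture.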
I like the “as few iterations as possible” concept, but when it’s referred to as “Big O”, I used to get a little peeved. It makes it seem like we’ve taken my favorite physical activity and decided we should be ‘getting it over with ASAP’ and calling it a night.
A fun thing was brought to my attention during the clustering discussion I led this weekend: the methods you’re using are often combined or paired with other techniques. This shed light on the whole situation and gave me a whole new perspective on this Big O thing.
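For instance, the chapter shows k-means being used not as an end in itself but as a preprocessing step whose output feeds a classifier. Here’s a minimal sketch along those lines with scikit-learn; the digits dataset, the cluster count, and the logistic regression on the end are my own choices for illustration:

from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Load a small image-classification dataset.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# KMeans isn't the final answer here: in a Pipeline it acts as a
# transformer, replacing each image with its distances to the 50
# cluster centers, and the classifier learns from those features.
pipeline = Pipeline([
    ("kmeans", KMeans(n_clusters=50, random_state=42)),
    ("log_reg", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print(pipeline.score(X_test, y_test))

Each step’s output literally becomes the next step’s input, which is why the speed of every individual step matters.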
Of course you want to achieve each goal as soon as possible, so you can move on to the next one, and so on until the project is complete.
Whew. That’s a much better way of looking at it.
I’ll be leading the discussion for Part II, Gaussian Mixture Models, on Saturday. More information can be found here.