In computer science, what mathematical notation is predominantly used to classify algorithms by describing how their performance or resource usage (such as running time) grows as the input size grows?
💡 Explanation:
Big O notation provides a theoretical framework for classifying algorithms by how their processing requirements (such as execution time or memory usage) scale with the size of the input, typically focusing on the worst-case scenario. Formally, f(n) = O(g(n)) means there exist constants c > 0 and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.
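
As a rough illustration (a minimal Python sketch, not drawn from any particular source; the function names are made up for this example), the two searches below solve the same membership problem but scale very differently: the linear scan is O(n) in the worst case, while binary search on sorted input is O(log n).

```python
def contains_linear(items, target):
    """Linear scan: the worst case examines every element, so O(n)."""
    for item in items:
        if item == target:
            return True
    return False

def contains_binary(sorted_items, target):
    """Binary search on a sorted list: halves the range each step, so O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return True
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

data = list(range(1_000_000))
print(contains_linear(data, 999_999))   # worst case: up to 1,000,000 comparisons
print(contains_binary(data, 999_999))   # roughly 20 comparisons (log2 of 1,000,000)
```

Both calls return `True`, but as the input grows a thousandfold, the linear scan's worst-case work grows a thousandfold while the binary search's grows by only about ten comparisons, which is exactly the distinction Big O notation is designed to capture.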