

The typical holder of a four-year degree from a decent university, whether it’s in “computer science”, “datalogy”, “data science”, or “informatics”, learns 3-5 programming languages at an introductory level and knows about programs, algorithms, data structures, and software engineering. Degrees usually require a bit of discrete maths too: sets, graphs, groups, and basic number theory. But such a graduate does not necessarily know computability theory (models and limits of computation); information theory (thresholds, tolerances, entropy, compression, machine learning); or the foundations of graphics, parsing, cryptography, and other essentials for the modern desktop.
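For a concrete taste of what’s missing, here’s a minimal sketch, my own illustration rather than anything from a syllabus, of the kind of exercise an information-theory course would open with: computing the Shannon entropy of a byte string.

```python
# Illustrative only: Shannon entropy of a byte string, in bits per byte.
# H = sum over symbols of p * log2(1/p), where p is the empirical frequency.
from collections import Counter
from math import log2

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    total = len(data)
    return sum((c / total) * log2(total / c) for c in counts.values())

print(shannon_entropy(b"aaaa"))            # 0.0: a constant string carries no surprise
print(shannon_entropy(bytes(range(256))))  # 8.0: every byte value equally likely
```

That number bounds how well any symbol-by-symbol coder can compress the data on average, which is exactly the threshold-and-tolerance flavour of reasoning the core curriculum skips.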
For a taste of the difference, consider English WP’s take on computability vs my recent rewrite of the esoteric-languages page, computable. Or compare WP’s page on Conway’s law to the nLab page on the same topic, which I wrote; it’s kind of jaw-dropping that WP quotes the law itself incorrectly and gets its consequences wrong.
I’m most familiar with the now-defunct Oregon University System in the USA. The topics I listed were all relegated to extras that weren’t part of the standard four-year degree; some were taught only at the honors level and others were available only to graduate students. Every class in the core either taught a language, applied a language, or covered discrete maths; and the selections were industry-driven: C, Java, Python, and Haskell were all standard teaching languages, and I also recall courses in x86 assembly, C++, and Scheme.