Some people just believe there is no innate knowledge, or that we don't need it if we just scale/learn better (in the direction of the Bitter Lesson).
(ML) Academia is also heavily biased against it, for mainly two reasons:
- It's harder to publish: if you learn Task X with innate knowledge, it's not as general, so reviewers can claim it's just (feature) engineering, which hurts acceptance. So people always try to frame their work as generally as possible.
- Historical reasons, due to the conflict with the symbolic community (which relies heavily on innate knowledge).
Seems like a band-aid solution for a broken system.
But in general, science will have to deal with that problem. Written text used to prove that the author put some level of thought into the topic. With AI, that promise is broken.
It's going to be really funny when the NIH eventually sits down the professors, hands them blue exam booklets, and makes them write proposals in freehand.
The question is: is AI breaking the system, or was it always broken and does AI merely show what is broken about it?
I'm not a scientist/researcher myself, but from what I hear from friends who are, the whole "industry" (which is really what it is) is riddled with corruption, politics, broken systems and lack of actual scientific interest.
It's quite funny to see that LLMs revived interest in knowledge graphs/reasoning/triple stores etc., since (at a high level) both are often pitched to solve the same goal (e.g., ask an AI about a topic...).
If you think about it, it makes a lot of sense. The main impediment to the usefulness of knowledge graphs was always how to build them, since turning unstructured data into structured data at scale is difficult. Now that's something LLMs are pretty good at.
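To make the pipeline concrete, here is a minimal sketch of turning LLM output into knowledge-graph triples. The `llm_extract` function is hypothetical and stubbed with a fixed response; in practice it would be a real model call prompted to return (subject, predicate, object) triples as JSON.

```python
import json


def llm_extract(text: str) -> str:
    """Stub standing in for an LLM call that is prompted to emit
    (subject, predicate, object) triples as a JSON array."""
    return json.dumps([
        ["Marie Curie", "born_in", "Warsaw"],
        ["Marie Curie", "won", "Nobel Prize in Physics"],
    ])


def text_to_triples(text: str) -> list[tuple[str, str, str]]:
    # Parse the model's JSON output into triples ready for a triple store.
    return [tuple(t) for t in json.loads(llm_extract(text))]


triples = text_to_triples(
    "Marie Curie, born in Warsaw, won the Nobel Prize in Physics."
)
print(triples)
```

The hard part a few years ago was exactly the extraction step stubbed out here; the parsing and storage around it were always easy.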
And sampling from a (now fixed) distribution can be made deterministic...
So the total generation of text from an LLM can be made fully deterministic. The problem for scientists is that we can't do that in the deployed systems...
You can set the temperature to zero in most APIs, which gives deterministic output. The only problem with that is some models produce inferior results with zero temperature, including lots of slop and AI-isms.
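The points above can be sketched with numpy: temperature scales the logits before the softmax, the temperature-zero limit collapses to greedy argmax decoding (deterministic), and even stochastic sampling is reproducible under a fixed seed. This is a toy illustration of the math, not any particular API's implementation.

```python
import numpy as np


def softmax_with_temperature(logits, temperature):
    # temperature -> 0 concentrates all mass on the argmax token;
    # temperature = 1 recovers the plain softmax.
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()


logits = [2.0, 1.0, 0.5]

# Greedy decoding (the temperature-zero limit) is fully deterministic:
greedy = int(np.argmax(logits))

# Sampling at temperature 1 is stochastic, but seeding the RNG makes
# the whole generation reproducible run-to-run:
rng = np.random.default_rng(42)
sample = int(rng.choice(len(logits), p=softmax_with_temperature(logits, 1.0)))

print(greedy, sample)
```

Deployed APIs generally expose the temperature knob but not the seed (and batched GPU kernels add their own nondeterminism), which is why reproducibility is easy in principle and hard in practice.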
"The goal of Automated driving is not to drive automatically but to understand how anyone can drive well"...
The goal of Deep Blue was to beat the human with a machine, nothing more.
While the pursuit of deeper understanding motivates a lot of research, most AI (read: modern DL) research is not about understanding human intelligence, but about automating things we could not do before. (Understanding human intelligence is nowadays a different field.)
Seems like you missed the point too: I'm not talking about Deep Blue, I'm talking about using the game of chess as a "lab rat" in order to understand something more general. Deep Blue was the opposite of the desire to understand "something more general". It just found a creative way to cheat at chess. Like that Japanese pole vaulter (I think he was Japanese, cannot find this atm) who instead of vaulting learned how to climb a stationary pole, and, in this way, won a particular contest.
> most AI (read: modern DL) research is not about understanding human intelligence, but about automating things we could not do before.
Yes, and that's a bad thing. I don't care if shopping site recommendations are 82% accurate rather than 78%, or w/e. We've traded an attempt at answering an immensely important question for a fidget spinner.
> Understanding human intelligence is nowadays a different field
Start-ups have to deliver something much more complicated than, and different from, these research projects.
If a research project actually delivers a somewhat coherent prototype at the end, it's seen as a huge success.
Most start-ups start with a proof-of-concept prototype and must then transform it into an economically viable product.
So comparing these success rates does not make sense. Multiple research groups can each deliver a rough prototype at the end and celebrate their "huge success"; in most fields, only a few startups can survive economically...
Teaching is not really relevant in the hiring process for professors.
I've seen several committees for professor positions, and teaching is treated like a checkmark: you should have done it, you provide a small sample lecture (which you prepare far more carefully than your average lecture), and you shouldn't suck at it. Once that box is ticked, the differentiating factors are citations and how much grant money you can/could/do bring in... (Western Europe; maybe elsewhere it's different.)
I would say teaching is not that relevant specifically for most tenure-track positions at big research universities. It is absolutely something you need to demonstrate some actual experience and proficiency with if you get a position at a small liberal arts college or a community college, where tuition is basically how they keep the lights on.
I’d also say that even at an R1, teaching volume at an acceptable quality is sometimes rewarded if your college within the university is very undergrad-heavy, because it can be part of how the university apportions funds to departments. So, it wouldn’t matter at a med school, but potentially a little in arts and sciences, though still a distant second to research.
There are also a small but increasing number of tenure track teaching-focused positions at big research universities. These folks typically help design and teach the biggest intro lectures and/or other very time- and labor-intensive courses. There are fewer of these positions than I’d like to see in an ideal world, but not zero.
Can confirm this. Teaching is a pass / fail grade for new professors to get tenure in most places. Your ability to get grant funding and publish highly cited papers are astronomically more important to the university than your teaching abilities.
I was told by a seminar speaker one time about a pre-tenure professor who was awarded his University's highest teaching award one day. The very next day he was denied tenure because he didn't publish enough.
I recommend reading the book "Tenure Hacks" [1] to anyone interested in pursuing a career as a tenured professor. I don't agree with all of the points in the book, but it is an important and eye-opening alternative perspective to the typical narrative surrounding academic positions.
I feel terrible about the idea of judging academics based on the amount of grant money they can get... It feels like encouraging a lot of smart people to find ways to waste money, even when they know that they don't really need that much for their project.
It does distract from the process of actually doing research, but I will point out that funding agencies take past productivity into account, so you can’t do literally nothing with the money and expect a grant to get renewed.
I don't have any subscriptions to the latest models but what improvements have you noticed in scaling and language understanding? Last I checked people were still discussing "9.9 > 9.1" and "How many 'r's are in 'strawberry'".
While there are a few papers on using KGs/ontologies to enhance training, this is really far from the mainstream, and I would be surprised if it were used anywhere (outside of a research paper).