Umm, so we still have to build enough traditional (and, ideally, dispatchable) generation capacity to make sure we can cover our electricity needs during those periods in winter when it's very cloudy and it's not windy?
"Cloudy and still weather has caused Great Britain’s renewable energy output to fall to near zero this week"
"Britain’s wind power output fell to just above zero on Wednesday, which, combined with the cold, dark weather, caused the market price for electricity to climb to almost £250 per megawatt-hour at auction, or almost seven times the average price before the pandemic"
"The sudden drop-off in renewable energy due to dull windless winter weather, known as dunkelflaute in German, has also forced the system operator to pay gas power stations more than £500/MWh to run on Wednesday evening when household demand is expected to reach its peak.
The weather conditions – the third dunkelflaute of the winter so far – left Britain’s electricity grid reliant on gas-fired power stations. They accounted for more than 70% of power generation at points on Wednesday."
I wonder if teaching an LLM how to write Prolog and then letting it write it could be a great way to explore spaces like this in the future.
I only ever learned it in school, but if memory serves, Prolog is a whole "given these rules, find the truth" sort of language, which aligns well with these sorts of problem spaces. Mix and match enough rules, especially across disparate domains, and you might derive some really interesting things: low-hanging fruit just waiting to be discovered.
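To make the "given these rules, find the truth" idea concrete, here is a toy forward-chaining rule engine sketched in Python (not actual Prolog, and not anyone's real model). The rules and fact names are invented for illustration, loosely themed on the dunkelflaute discussion above:

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical domain knowledge: weather conditions and grid consequences.
rules = [
    (["cloudy", "still_air"], "low_renewable_output"),
    (["low_renewable_output", "winter_peak_demand"], "gas_plants_dispatched"),
    (["gas_plants_dispatched"], "high_spot_prices"),
]

derived = forward_chain(["cloudy", "still_air", "winter_peak_demand"], rules)
print("high_spot_prices" in derived)  # True
```

In real Prolog you would state the rules declaratively and let the resolution engine search for proofs (including backward from a goal); this sketch only does the simplest forward-chaining version of that idea.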
I’ve worked at a company where all we had were large, unified, post-request logs, and honestly those logs were orders of magnitude easier to work with than having to collate lots of logs into a single concept. ELK liked those giant logs more, too.
It does help that the system was built from the ground up to use the big logs.
I think the best feature was that we would extract and build a ton of keys into the full-size log, so you could find all transactions related to X really, really easily, where X was a wide variety of things.
Every place I’ve been elsewhere the logs have always been way harder to parse, process or even just find. It would always take several disparate queries to get everything.
E.g. “oh, we need to find something related to X? Well okay, that gives us 50 different separate calls, now let’s do another search on each of those calls’ unique transaction id and review each whole transaction individually” vs “I need X. Cool, here’s all 50, complete calls.”
Edit: to be clear, throughout the code it was just “context.info()”, over and over again, like regular logging, it was just shoved into a big object at the end of all non-application-crashing exceptions. And the application was built to not fully crash, like, ever. (And this worked)
Or possibly “in addition to”, yeah. I think this is where it needs to go. We can’t keep training HUGE neural networks every 3 months and throw out all the work we did and the billions of dollars in gear and training just to use another model a few months later.

That loop is unsustainable. Active learning needs to be discovered / created.
I don't think old prompts would become useless. A few studies have shown that prompt crafting is important because LLMs often misidentify the user's intent. Presumably an AI that is learning continuously will simply get better at inferring intent, so any prompts that were effective before will continue to be effective; it will simply grow its ability to infer intent from a larger class of prompts.
That depends on the goals of the prompts you use with the LLM:
* as a glorified natural language processor (like I have done), you'll probably be fine, maybe
* as someone to communicate with, you'll also probably be fine
* as a *very* basic prompt-follower? Like, natural language processing-level of prompt "find me the important words", etc. Probably fine, or close enough.
* as a robust prompt system with complicated logic in each prompt? Yes, it will begin to fail catastrophically, especially if you want repeatable results.
I'm not sure that the general public is that interested in perfectly repeatable work, though. I think they're looking for consistent and improving work.
They were trying to compete with a couple of existing, VERY good alternatives, and the people actually most likely to use that product were already on those services.
It was a losing play that didn’t know what market it was actually entering.
VRChat is the most popular one. Age verification. User generated models. User generated worlds. Revenue sharing in worlds. For-sale models and props. It’s quite feature rich now.
They're not all deranged! Some are completely productive, functional furries. Probably. Maybe.
Also, your statement is far too reductive! There's plenty of avatars with scales! Also, don't forget the anime girls that are actually middle-aged men and the occasional sentient burrito.
The suspiciously wealthy software developers, astronauts, pharmacists, game devs and artists who build the high quality 3D models, Blender and Substance Painter tutorials, and add-ons that prop up a good percentage of the VR headset market and the Patreon market, and who have a thriving artisan ecosystem?
What do you expect? Did you see the movie Ready Player One? This kind of experience is ideal for furries and cosplay types, and they featured in the movie too.
If you can be anything, it makes sense it attracts people who want to not be what they already are.