Hacker News | pjrule's comments

I’ve done something like this in the past with a spreadsheet, but I recently fell out of the habit because of the overhead of opening the spreadsheet on my laptop for a few minutes at the end of a long day. I’d like some kind of cross-platform life-logging app that works equally well for longform entries when I have a good keyboard and for quick notes when I have just a phone and a few seconds of time. Some kind of highly customizable rich schema support (long text entries + basic biometrics like self-reported sleep) would be ideal. Any suggestions?


Check out Airtable. It functions like Excel, but you can make survey-style web pages that drop responses directly into your database.


Yeah, Hanson phrases this idea poorly. It might be better to say that creating large backroom software has a high barrier to entry. For instance, popular packages like DynamoDB (internal to Amazon.com before public release) and TensorFlow (internal to Google before public release) make certain types of work easier and more efficient; they’re also large enough that they couldn’t easily be reproduced by smaller competitors. Those companies had to pay said “large fixed cost,” as well as a continuous maintenance cost, to develop those tools. Economies of scale play a related role here: the more engineers you can afford to hire, the less you have to outsource and the more in-house solutions you can build, reuse companywide, and ultimately turn into a long-term competitive advantage.

In fairness to Hanson, a natural monopoly is distinct from a “monopoly” in the popular sense. Public utilities are often natural monopolies—for instance, a city might have only one power company because it doesn’t make sense to build the redundant infrastructure for two power companies. Doing so would incur a high, unnecessary fixed cost. [1] I think Hanson’s trying to draw an analogy to such firms, though I find it a bit loose.

[1] https://en.m.wikipedia.org/wiki/Natural_monopoly


Ben Recht also has an excellent series of blog posts (very related to this survey on arXiv, but broader) on the intersection between reinforcement learning and optimal control. An index is available here: http://www.argmin.net/2018/06/25/outsider-rl/


I was just reading those last night. Definitely good stuff.


As someone working on a reinforcement learning/neuroevolution problem right now, I find this to be extremely exciting. Fewer parameters, ceteris paribus, is always better—the fact that the experiments in this paper were run on one workstation, rather than on a massive farm of TPUs à la AlphaGo, implies quicker development iteration time and more accessibility to the average researcher.

The staging of components in this paper (compressor/controller), where neuroevolution is only applied to a low-dimensional controller, reminds me of Ha and Schmidhuber's recent paper on world models (which is briefly cited) [1]. They employ a variational autoencoder with ~4.4M parameters, an RNN with ~1.7M parameters, and a final controller with just 1,088 parameters! Though it's recently been shown that neuroevolution can scale to millions of parameters [2], the technique of applying evolution to as few parameters as possible and supplementing with either autoencoders or vector quantization seems to be gaining traction. I hope to apply some of the ideas in this paper to multiple co-evolving agents...

[1] https://worldmodels.github.io

[2] https://arxiv.org/abs/1712.06567
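The "evolve as few parameters as possible" idea is simple enough to sketch. Below is a toy (1+1) evolution strategy in Python (my own illustration, not code from either paper; the quadratic fitness function, target vector, and hyperparameters are all invented stand-ins for a real controller's reward):

```python
import random

def evolve(fitness, dim, generations=1000, sigma=0.1, seed=0):
    """Minimal (1+1) evolution strategy: mutate a small parameter
    vector and keep the child only if it scores at least as well."""
    rng = random.Random(seed)
    parent = [0.0] * dim
    best = fitness(parent)
    for _ in range(generations):
        child = [p + rng.gauss(0.0, sigma) for p in parent]
        score = fitness(child)
        if score >= best:
            parent, best = child, score
    return parent, best

# Stand-in fitness: negative squared distance to a "good controller".
target = [1.0, -2.0]
params, score = evolve(
    lambda w: -sum((a - b) ** 2 for a, b in zip(w, target)), dim=2)
```

With only a handful of parameters, even this naive hill-climber converges quickly on a single CPU, which is the appeal of pushing most of the capacity into the compressor.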


You may be interested in an even older paper: http://www.idsia.ch/~juergen/icdl2011cuccu.pdf


Thanks so much! I read this (and a few related papers) today. Besides the novel algorithm discussed in the new Atari paper, do you have a reference implementation of online vector quantization you might be able to recommend? I think I could probably figure it out from the paper alone, but sometimes it's nice to see code other people have already optimized. :)


Uhm, unfortunately I do not; I could search for some on Google, but I doubt I would fare better than you at it. I ended up coding my own version, and it is quite straightforward. You can find it here: https://github.com/giuse/machine_learning_workbench/blob/mas... Although it is polluted by research trial and error, you can easily pick out the minimal code needed to run it. Here's an example of how to use it: https://github.com/giuse/machine_learning_workbench/blob/mas... Let me know if that works for you or if you have further questions!
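For what it's worth, the core update of online vector quantization is small enough to sketch in a few lines (an illustrative toy in Python, not the workbench code linked above; the first-k initialization and the fixed learning rate are arbitrary choices):

```python
def online_vq(stream, k, lr=0.05):
    """Toy online vector quantization via competitive learning:
    each incoming vector nudges its nearest centroid toward itself."""
    stream = list(stream)
    # Seed the k centroids with the first k inputs (one common choice).
    centroids = [list(v) for v in stream[:k]]
    for v in stream:
        # Winner = centroid nearest to v (squared Euclidean distance).
        j = min(range(k),
                key=lambda i: sum((c - x) ** 2
                                  for c, x in zip(centroids[i], v)))
        # Move the winner a small step toward the input.
        centroids[j] = [c + lr * (x - c) for c, x in zip(centroids[j], v)]
    return centroids
```

Roughly speaking, swapping the fixed rate for a per-centroid decaying one turns this into MacQueen-style online k-means.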


That’s excellent! Thanks!


>I hope to apply some of the ideas in this paper to multiple co-evolving agents...

Care to elaborate?


Indeed, I had a similar experience. Perception is strange.


I had a similar experience too, but I'm not positive whether it's based entirely on perception (which is certainly possible) or some kind of bug in the demo. It would be nice to save individual Ogg files at particular points on the continuum and try more of a blind test to see how much influence there is from what we've just heard before. (I guess I could generate those myself easily enough with sox or something...)


While I was using the slider on my computer, I kept a laurel/yanny video playing on my phone

=> I think I can confirm there is no bug in the demo.


Wow, I experimented with it more and it's a matter of context, just as you and other people in this thread have described. That's really striking. Thanks for confirming that it's not a bug.

This is maybe an even more extreme phenomenon (from the same subreddit that was apparently involved in making the Laurel/Yanny thing go viral this past week):

https://www.reddit.com/r/blackmagicfuckery/comments/8jxzee/y...

With this video, I find that I consistently hear whichever phrase I'm thinking of at the moment! (In this case either "brainstorm" or "green needle".)

Edit: Also, if anyone in this thread likes this stuff then you might really enjoy

https://www.wnycstudios.org/story/91513-behaves-so-strangely

if you've never heard it before (or even if you have).


This is a fantastic idea, and I think I’ll definitely be applying! Have you considered offering two tiers of grants? Most AI side projects don’t necessarily require ~$30k in resources, but I’m sure many people, including myself, could benefit from ~$3k in GCE credit. Opening up the same total resources to more people would allow for a greater number of moonshots, which are what this grant seems to be about. :)


Great idea, and something we’re considering for future batches. We may even do it this batch, depending on the final mix.


I could use the cash and some of the credits, but most of it would go unused. But maybe I'm one of few doing AI projects not involving a deep neural net? Hehe.


>maybe I'm one of few doing AI projects not involving a deep neural net

You're not. While progress in machine learning is astounding and is going to do great things for humanity, it is kind of frustrating that "AI" to many people these days means neural nets, deep learning, high-end GPUs, and huge datasets. Not every problem needs machine learning as a solution, and there are still some of us solving hard problems with classical AI and old-fashioned data analysis. But it's not trendy right now.


What kinds of problems are best solved by AI, but not deep learning/neural nets?


Natural language processing is a big area where classical AI is heavily used and machine learning lags behind. ML is still poor at getting computers to read or produce human writing. Compare what the company Automated Insights is doing for news articles with neural networks' attempts at writing books and movie scripts (with humorous results).

Anything with a known, well-defined set of rules can also be programmed more efficiently with decision trees or Markov chains than through machine learning. There's no need to train a model on tons of data over time when all it needs is the rule book.
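To make the rule-book point concrete, here is a toy Markov chain in Python (my own illustrative sketch; the weather states and probabilities are invented): when the rules are known up front, you write them down directly instead of learning them from data.

```python
import random

# Toy "rule book" encoded directly as Markov transition probabilities;
# no training data required, the rules ARE the model.
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state, rng):
    """Sample the next state straight from the rule book."""
    r, acc = rng.random(), 0.0
    for nxt, p in TRANSITIONS[state].items():
        acc += p
        if r < acc:
            return nxt
    return nxt  # guard against floating-point rounding

def simulate(start, n, seed=0):
    """Run the chain for n steps from a starting state."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(n):
        states.append(step(states[-1], rng))
    return states
```

Since the transition table is the model, changing a rule is a one-line edit rather than a retraining run.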

Machine learning is basically advanced pattern matching, and neural nets often require you to brute-force the problem with tons of data. There are plenty of places where you're not matching patterns and don't have tons of data, but the computer still needs to make a decision after analyzing the data that is available.


Natural selection at work? ;)

