As you can tell from his commentary, he thinks the machine is rather stupid (and he is winning after the opening), but the machine is a much better calculator than he is, and when the situation becomes more concrete he has to force a draw.
Of course, the computer on his phone is considerably worse than the best chess engines, but top chess players generally consider computers to be excellent calculators but dumb in terms of general strategy.
> but top chess players generally consider computers to be excellent calculators but dumb in terms of general strategy.
Is computer-assisted chess a thing? Perhaps with standardized hardware, but allowing any software.
In chess I'm good at strategy but terrible at calculating, and I miss obvious stuff all the time while pursuing my strategy. I always thought I'd do great with computer assistance to catch the obvious stuff, with me telling the computer the long-term strategy.
It was called Advanced Chess [1], now it's sometimes referred to as Centaur Chess. You can find online servers to play this kind of thing but it's mostly just a sort of curiosity. Computer-human pairs are well known to be better than just computers at chess though.
> In chess I'm good at strategy but terrible at calculating
Chess is 99% tactics/calculation. What we call "strategy" is just a set of heuristics that we use to avoid having to do endless calculations. However, a lot of those heuristics are already included in most chess-playing software. So, if you're a weak player overall, even if you have some strategic acumen, your contribution in an assisted chess setting will be negligible. The computer will be doing all the work anyway.
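To illustrate the kind of heuristic being talked about, here is a toy sketch (not taken from any real engine) of the simplest "strategic" rule engines bake in: a plain material-count evaluation.

```python
# Toy material-count evaluation -- the simplest kind of heuristic
# a chess engine encodes. Illustrative only, not any engine's code.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(board):
    """Score a position from White's perspective.

    `board` is a list of piece letters: uppercase = White,
    lowercase = Black, e.g. ["K", "Q", "P", "k", "r"].
    """
    score = 0
    for piece in board:
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

# White has king + queen + pawn, Black has king + rook: +10 - 5 = +5
print(material_score(["K", "Q", "P", "k", "r"]))  # -> 5
```

Real engines layer many such heuristics (piece activity, king safety, pawn structure) on top of deep calculation, which is why a casual player's "strategic" input adds so little.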
My rating is around 2000 and I have done some assisted chess playing, and I can tell you that it's extremely hard not to just take the computer's suggestion at every move. The chance that I'll come up with some brilliant move that the computer missed is very slim.
I think I misunderstood what "calculating" means in a chess setting. I thought it meant checking the current position of the pieces and making sure you are not about to be attacked.
But googling it suggests it's more about thinking of the value of each move relative to others. If that's the case I'm not actually bad at that.
> I can tell you that it's extremely hard to not just take computer's suggestion at every move.
Is that how it works? The computer just basically plays and shows you some moves it likes?
That's not what I meant. I was thinking that you tell the computer something like: I want to capture piece X using Y 10 to 20 moves from now, perhaps by going in this direction. Tell me the best series of moves to get there while avoiding traps.
Or even better give it 2 or 3 such scenarios and have it tell you how dangerous each one would be so you can pick one.
Basically really narrow down the permutations the computer has to calculate.
It is. See Centaur chess[1] and correspondence chess. In correspondence chess people don't use standardized hardware (people will probably cheat anyway) but in Centaur chess this can be done.
There is a chess app on iOS that suggests moves and warns you about obvious situations where you would lose a piece. It uses an old but still popular engine: https://itunes.apple.com/us/app/chess/id522314512?mt=8
I had the most fun with this chess engine. The Stockfish engine was not fun at all; it played like an "asshole" most of the time, and when its strength was reduced it was like playing against a stupid "asshole". When playing against a human you feel like your opponent is up to something, but against an AI you feel like you're running for your life. Mind you, I am a very amateur chess player; I play only recreationally.
Computer assisted chess is becoming more popular in human-only tournaments. In that context it's usually called cheating, of course. Ken Regan has developed some interesting statistical methods to detect it, see for example his blog post on Gödel's Lost Letter and P=NP:
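Regan's actual model is far more sophisticated, but the core intuition can be sketched as a simple move-matching test: compare how often a player's moves coincide with an engine's top choice against the rate expected for their rating. The baseline rate and the numbers below are made up for illustration; they are not from Regan's work.

```python
import math

def match_rate_zscore(matches, total_moves, baseline_rate):
    """Z-score of a player's engine-match rate against a baseline.

    `baseline_rate` is the match rate expected for the player's
    rating (a made-up figure here; real models derive it from
    large game databases and account for position difficulty).
    """
    observed = matches / total_moves
    # Standard error of a proportion under the baseline hypothesis
    se = math.sqrt(baseline_rate * (1 - baseline_rate) / total_moves)
    return (observed - baseline_rate) / se

# 55 of 60 moves matching the engine, where ~55% is typical
# for the player's rating, is many standard deviations out:
z = match_rate_zscore(55, 60, 0.55)
print(round(z, 1))  # -> 5.7
```

A single suspicious game proves nothing on its own; the statistical approach only becomes convincing across many moves and games, which is exactly why it is used as supporting evidence rather than proof.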
Guessing that it's referring to the engine being particularly greedy about material (a notorious trait of computer engines). It's giving up a ton of control on the back rank to defend a crappy doubled pawn. Happy to be proven wrong on this one, just shooting from the hip here.
It's actually not that common. Players will use chess engines to analyze positions, but they will rarely play against them because it doesn't provide realistic practice for an actual chess match.
For example, it may be reasonable to play an aggressive variation against an opponent because you think they might have difficulty finding a response in time pressure, but a computer can make precise calculations in any situation, and so such a strategy almost always backfires.
What's more, chess is often abstracted at higher levels in terms of things like long term plans ("I want to put pressure on the c7 pawn and control c6") instead of concrete material gains, which allows players to make progress even in positions where there are no direct threats and no sensible exchanges of pieces. Computers when faced with such situations will know that the position is objectively drawn and so will just shuffle their pieces around aimlessly, and playing against this kind of thing is not very good practice for actual human opponents who will try to find ways to beat you anyway.
Summary of the paper for those who don't want to read it:
So basically there are two categories of "learning" involved in this sort of research, supervised and unsupervised. In supervised learning, someone gives the computer a long list of concepts and their attributes ("frog", "green frog", "jumping frog") and a set of pictures to go with each item, and feeds them into a visual-recognition algorithm. In unsupervised learning, the computer is given a concept like "frog" but then has to discover all the variations itself and get its own visual data to match.
The claim in this paper is that they have made the unsupervised learning as strong as the supervised learning. That is, they give the computer a concept ("frog"), it goes and searches through Google Books for common variations ("green frog", "jumping frog") and then uses Google image search to fetch images for each of those queries. They can then remove the obvious false positives (they test to see which images seem to screw up their learning algorithm and leave those out), and the result they get is on par with the supervised learning methods.
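The "remove the obvious false positives" step can be sketched in miniature. The toy code below is not the paper's method: it uses made-up 2D feature vectors and a crude distance-to-centroid filter, purely to show the shape of the idea of discarding web-search images that don't fit the concept's visual cluster.

```python
# Toy sketch of the pruning idea: discard training examples that
# sit far from the concept's visual cluster. Features and the
# filtering rule are stand-ins, not the paper's actual method.

def centroid(points):
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def prune_outliers(examples, keep_fraction=0.8):
    """Keep the examples closest to the centroid of their own set.

    Loosely mimics filtering out the web-search false positives
    that would otherwise "screw up" the learning algorithm.
    """
    c = centroid(examples)
    ranked = sorted(examples, key=lambda e: sq_dist(e, c))
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

# Four tight "frog" feature vectors plus one obvious outlier:
data = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 1.2], [9.0, 9.0]]
print(prune_outliers(data))  # the [9.0, 9.0] false positive is gone
```

The paper's version is cleverer -- it tests which images degrade the learned model rather than using raw feature distance -- but the effect is the same: noisy web data gets cleaned without a human in the loop.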
----------------------
In my opinion, this is only mildly interesting because Google Image Search functions based on human input anyway -- Google knows the difference between a "frog" and a "jumping frog" or even a "camel" simply because people on the internet caption such images and Google can make associations between images and their captions. Essentially, what the researchers have managed to do is outsource the work of some grad student to millions of people around the world through Google.
Of course, it could be argued that there is some sort of parallel with what humans actually do (we know what things are called because we hear other people call them that), but even if I didn't know the name of an animal I could still tell you when the same animal is in different pictures, and I can also tell you when it's jumping and what colour it is. I don't need to have someone caption the image for me to understand the broad range of situations to which the caption "jump" applies.
>I don't need to have someone caption the image for me to understand the broad range of situations to which the caption "jump" applies.
I wonder if this has anything to do with the fact that we can jump too. That we can translate the frog's position into something we do as well.
Of course, one can argue that we can do the same for non-anthropomorphic things as well. What I think is that we don't directly relate pictures the way the software is taught to. What we do is translate that 2D picture into something we'd see in the 3D world. And that 3D "vision" isn't just another image: it represents an object in our world, something that has shape and existence, something which we can observe with our other senses as well. For us a picture doesn't always represent an abstract thing, an arbitrary pattern of colours. It usually represents something concrete, something about which we have tons of other pieces of knowledge as well.
So we relate pictures by checking if they map to the same real-world object. And here that "object" is a sort of nexus of many pieces of information we have on it which is a product of many direct and indirect human experiences.
So I don't really think that we are in a position to teach a computer to do anything like that.
> I don't need to have someone caption the image for me to understand the broad range of situations to which the caption "jump" applies.
I'm more interested in metaphor and analogy.
My 3.5-year-old son said, "Look at the rain! It is bouncing like hopping frogs!"
I don't know if he created that. It's not in any of his books. I guess he jumps like a hopping frog at nursery and transferred that to rain.
I'm not so interested in a computer that is trained on frogs, and which sees a hopping frog and describes it as such. If it saw a hopping cat and said this thing is hopping but I don't know what it is, then I'd be interested.
The entire point is that the government can invest in things that it deems valuable but not necessarily likely to make an immediate profit.
The internet is an excellent example. Nobody would have predicted that it would take off as it did, so anyone who wanted to build an "internet" would have had difficulty accumulating the capital for it. But since the military was able to build it without worrying about future profits, a large-scale, risky project was able to become a success.
The same is true of any long-term research, or any type of project that is primarily a public good. Most scientific research is not done with the explicit intention of "we want to make X", because what "X" is can't be known until you understand what is possible! But since nobody knows what is possible beforehand, investors aren't willing to invest in most research, because the simple advancement of scientific knowledge is something of public value and not private value.
Although I used the word "build", the entire post is talking about the investment. Regardless of whether you consider the people who "built" it to be government employees, the funding was military funding. Things like ARPANET were Department of Defense projects.
> The socioeconomic world is as good as it is because most people are employed, doing something which contributes to maintenance and advancement of society & technology.
Can you justify this? If we use the US as an example and look at the most common jobs, we find that over 4 million people are employed as salespersons, another 3 million as cashiers, followed by 3 million people employed serving food (including fast food), etc. [1]
As the population has moved away from agriculture, and more recently manufacturing, it has become increasingly employed in the service sector. These people are not creating and advancing the technology of tomorrow; they are performing basic tasks that could easily be done by the customers themselves (e.g. automated checkouts).
What's more, there are now millions of people employed in areas such as advertising and sales, where people are essentially tasked with manufacturing wants and increasing the amount of money people spend, which not only defeats the entire basis of a functioning economic system (rational consumers making rational choices), but ensures that additional numbers of people are employed in the manufacture of superfluous goods that people are deceived into buying.
Is there any indication that if we seriously consider what is and isn't necessary in our society, the working population couldn't easily be cut by half without any kind of imminent collapse?
Sure I can justify this. You present 10,000,000 employed people working jobs which could, conceivably, be automated. Do you seriously think that, given currently available technology, their employers would not replace those people with machines in a heartbeat if the latter were cheaper? Sure, you can hypothesize about how that should be the case, but you would be overlooking a multitude of realities that indeed render the human worker superior to the automated alternatives.
I dislike human-clerk checkouts at stores. Nothing against people per se, but as a high-tech introvert I'd rather do it myself - exactly as you suggest. Sure, the technology is there; heck, Walmart even has a phone-based self-scanner so you can do 95% of checkout before you even get to the self-serve register for final payment. The technology sounds great on paper and in rhetoric, but in reality it sucks (despite my valiant concerted efforts to be an automation-supporting customer). Standard self-checkout chokes on the bottle of wine ("Human clerk, is customer over 21?" Uh, yeah, I'm greying with two kids in tow, of course I'm of age). Phone-based on-the-go checkout is time consuming (stop, turn on phone (again), tap Scan, point camera, wait for slow auto-focus, wait for crappy in-store wireless connection to function, get to payment station, get selected for another 15-minute "you've been selected for a compliance check" which confuses human staff every time). Never mind shoplifting, mis-scans, and a host of other problems. Those ten million "technology replaceable staff" are still employed because they're better at the work than technology, and relieving them of their duties does NOT make their freed-up wages available for confiscation & redistribution as "living wages" to the now-unemployed former workers (it's going to technology costs and high-tech maintenance staff). That's 10M people "contributing to maintenance of society".
As for manufacturing wants and increasing money spent vs. consumers making rational choices? I'm watching a startup put serious money into marketing staff; the product is [r]evolutionary and WILL "advance society & technology", but isn't going anywhere without convincing a lot of people to buy it, and the staff IS earning their significant wages by doing so. Oh sure, there's a lot of sales of superfluous goods out there, but even that helps fill out & support an infrastructure which gets vital goods to a broad clientele: Walmart isn't going to get five pounds of bread flour on a shelf for $1.89 without the delivery system greased with the profits from the "cheap crap" they're famous for; in comparison, that same sack of flour would cost about $5 at everything-is-perfect Whole Foods.
Indication that considering what is and isn't necessary in society could "liberate" half the population? Yeah: every society that tried it, like the Soviet Union (hint: they'd kill people for trying to leave).
I gave an example of an automated checkout, but my point was not that the jobs I mentioned could all be automated -- my point was that they are not necessary. When I say "necessary", I don't mean that they aren't necessary to increase corporate profits (salesmen, whom I mentioned, certainly are), I mean they are not necessary to ensure a stable and functioning society. The example I gave with automated checkouts is not to demonstrate that automated checkouts are _better_ than human workers -- human workers are and will likely remain far more capable than machines at running checkouts for a long time. My point is that they _could_ be replaced, and that if society decided that increasing human liberty was a more important goal than certain small inconveniences at the checkout, they _would_ be replaced.
So yes, these people are "contributing to the maintenance of society", but there are conceivable alternatives that would give these individuals (and society as a whole) more liberty, and would only require some small inconveniences.
The anecdote you give of your startup which needs advertising to get off the ground is beside the point. Perhaps your startup is revolutionary, and perhaps it needs some advertising to get off the ground, but this doesn't change the fact that most advertising is simply misinformation. Television commercials which make use of lush landscapes and half-naked women to sell cars don't create rational consumers, and without rational consumers you cannot have a functioning market. If advertising were simply a way in which companies communicated well-reasoned facts about products, and came with a balanced analysis of a product and its competitors, then we could argue that advertising was working towards creating a functioning market system. Until then, advertising will simply favour those with the largest advertising budgets and those who are best at disinformation, and so it's difficult to argue it is contributing to society.
> Indication that considering what is and isn't necessary in society could "liberate" half the population? Yeah: every society that tried it, like the Soviet Union (hint: they'd kill people for trying to leave).
This is really cheap rhetoric -- and not even accurate. If I'm arguing for a society where people are liberated from work, why are you using the Soviet Union, a dictatorship where everyone worked all the time, as some kind of counterpoint? Not everything that contradicts the status quo is totalitarian communism, you know.
Besides, the points I'm trying to make aren't even original. There are serious proposals that have been made for why society should move to a 20-hour work week to address things ranging from rising levels of depression to climate change. [1]
Simple wrap-up: society won't move to a 20-hour work week because those who do will become jealous of those who don't, desiring unto "necessity" those things the 40- (and 60-, and 80-) hour worker can afford.
You can live on a very very small income right now. I figure an intelligent frugal life can suffice at $10/day. But you don't, because you won't give up what you don't need.
It seems likely that someone involved with the contest is trying to use it for free advertising. Look at the "About" page on that flash app, the person apparently lives in Waterloo, and his email has "uwaterloo" in it.
There is supposed to be a new version of this contest coming up soon, although it seems to be temporarily stalled as the main developers are busy with other things.
I don't think anyone posting about how easy it is to circumvent the paywall understands the point: it's not targeted at you. The NYT knows that if they create a completely restrictive paywall they will die, so they're trying to let just enough people in and annoy them just enough to get people to pay for their service. Whether or not somebody with basic technical knowledge can bypass it is not the point. After all, they don't even ask you to register an account, and yet they expect to stop you after you've read x articles per month -- why would anyone expect this to be secure?
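The mechanics of such a soft "metered" paywall are trivial, which is the point. Here is a minimal sketch (the limit of 20 and the dict standing in for a browser cookie are illustrative, not the NYT's actual implementation):

```python
# Minimal sketch of a soft "metered" paywall: a per-visitor
# counter, in reality stored client-side in a cookie and hence
# trivially resettable. The limit is an illustrative number.

FREE_ARTICLES_PER_MONTH = 20

def can_read(cookie):
    """Decide whether to show the article or the paywall prompt.

    `cookie` stands in for the client-side counter; a visitor who
    clears cookies simply starts again at zero, which is exactly
    the leniency this scheme accepts by design.
    """
    count = cookie.get("articles_read", 0)
    if count < FREE_ARTICLES_PER_MONTH:
        cookie["articles_read"] = count + 1
        return True
    return False

cookie = {}
reads = sum(can_read(cookie) for _ in range(25))
print(reads)  # only the first 20 of 25 attempts succeed -> 20
```

Because the counter lives on the client, "bypassing" it is a one-line fix (delete the cookie), and enforcing it server-side would require the account registration the scheme deliberately avoids.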
Now that Microsoft itself has started a campaign against IE6 (http://www.ie6countdown.com/), users should take a quick look at all those other, up-to-date browsers available for XP.
I still don't understand why Google hasn't done the exact same test with a control variable. Run the exact same test again on a domain that isn't google.com.
There are three reasons for Google not to report results of a control-case test:
1) It didn't occur to them to run a control in their experiment.
2) It did occur to them but they decided not to do it for some reason.
3) They did run the control but it did not bolster their case.
As you point out, if they ran it and it bolstered their case there would be no reason not to report it. Of the above reasons, #2 seems the least likely (because it's easy to set up a control and it would better clarify the situation). I do not have enough evidence to judge whether #1 or #3 is more likely.
https://www.youtube.com/watch?v=pNvVWeHZG00