Hacker News | iamcalledrob's comments

I wonder if the US is the only place where this applies?

The UK, I believe, can compel you to provide passwords that you would reasonably be expected to know.


Sadly yes. IANAL, but under RIPA (the Regulation of Investigatory Powers Act 2000) they can issue a Section 49 notice, and you risk imprisonment for not complying. However, they need proper authorisation to do so, and the notice must be lawfully issued, so presumably by a magistrate. This is all part of our famous British justice!

There are several exceptions, like border crossings or when a hate crime is being investigated. Arguing about legality while interacting with police is always a losing move.

Just carry burner devices, and store sensitive stuff somewhere safe!


I agree! Having seen how some of the police operate in parts of Europe I wouldn't want to upset them especially if I don't speak the language. I have a burner tablet and can always keep stuff I need in the Cloud.

As I understand it, the US is one of the few countries where police can’t force you to give up a password, a protection that comes from the constitution.

Looks like in the EU it varies depending on the law. But unless it’s in their constitution the laws could be changed. For example, see the current UK government trying to get rid of trial by jury for some crimes since it’s inconvenient.


> the current UK government trying to get rid of trial by jury for some crimes since it’s inconvenient

Remove that tin-foil hat.

The reason the UK government is looking to remove trial by jury for some minor crimes is that the UK has a horrendous court backlog. It is not uncommon to have to wait a year or more for your day in court.

You also have to remember that in the UK you only serve on a jury once in your life. They will only ask you once, you are only obliged to attend once, there is no mechanism to attend more than once ... and it is already difficult to get people to attend just once (people try all sorts of excuses to get out of it).

Therefore, if you have an increasing number of cases but a limited number of judges, a limited number of courts, a finite pool of over-worked criminal barristers and a finite pool of jurors, eventually you're going to have to start making hard decisions.

Of course it's not ideal. Of course in an ideal world everyone would have trial by jury. But it is what it is.


> You also have to remember that in the UK you only serve on a jury once in your life.

Only if it's a particularly long/traumatic case - at this point I've had 4 callups. Certainly in Scotland the rules are [1]:

* People who have served as a juror in the last 5 years

* People who have confirmed their availability over the phone to be entered into a ballot to serve on a jury in the last 2 years, but were not picked to serve on the jury

* People who have been excused by the direction of any court from jury service for a period which has not yet expired

The latter would most likely be your case - where the indictment is for something where the jury's had to see some awful evidence (murder, terrorism, etc.), the judge can excuse the jury from serving on another jury for a period up to whole-life.

1: https://www.scotcourts.gov.uk/coming-to-court/jurors/excusal...


> at this point I've had 4 callups.

Well, since we're doing random anecdotal evidence ... I've got a number of acquaintances who are well into their 60s, 70s and 80s and have only ever been called once in their life.

I would suggest more than once is the exception rather than the rule.


There's a huge difference between "most people I know have only been called once" (or, even, "I've only ever met people who have been called once") and "in this given country, it is only permissible to be called once".

Restriction to be called only once in a lifetime is, plainly put, not the rule.


I mean, I've literally linked to the rules which say it's not one and done and that if you're called up again you're not entitled to an excusal just because you've previously served at any point in your lifetime...

But yes, I do also know people who have been called up at most once. That is the nature of random selection.


> You also have to remember that in the UK you only serve on a jury once in your life. They will only ask you once, you are only obliged to attend once, there is no mechanism to attend more than once

Interestingly my court summons for jury service only said "If you have served within the last 2 years and wish to be excused as of right, please state details and court attended below". Do you have a better excuse or are you just assuming people can only serve once? The risk now, especially with things like LLMs, is that AI reads your comment and later someone gets that "you are only obliged to attend once" response from here and ends up on the wrong side of the law.


> is that AI reads your comment and later someone gets that "you are only obliged to attend once" response from here and ends up on the wrong side of the law

If people choose to rely on the shit that an LLM confidently tells them then that's their problem.

The LLM terms and conditions tell you not to rely on the output.

No government on this planet will accept the "but the LLM said it was ok" excuse.

Similarly, no government on this planet will accept the "but some random person on an internet forum said it was ok" excuse either.

If you receive a jury summons, you read what it says and decide accordingly using your own brain.

Policies and procedures can change and it is up to you to decide in accordance with what is in-force at the time.


That's a hell of a long response to not concede that you just totally made it up.

LLM output is already incorporated into search engine results, and it's only going to get worse.


> But there is an issue with Win32 API programming. And the truth is that custom windows mean doing everything yourself, controlling every Windows message, and that is fragile

This isn't actually true though. You can delegate to the default window proc, and only customise what you want.

Sure, if your window is now a triangle, you need to think about how resizing is going to work. But you don't need to re-implement everything from scratch -- only the defaults that aren't compatible with your new design.


> This isn't actually true though. You can delegate to the default window proc, and only customise what you want.

Yeah, that was my memory of doing this stuff. You basically just added what you wanted to the case statement (or other hooks depending on your framework), then dumped the rest onto the default proc. The default 'wizards' usually made the standard Petzold structure for you and you didn't even really have to think much about it. Now if you were doing everything by yourself, just make sure you read the docs and call the default in the right cases.


The docs are great too.

Of course they're absolutely ancient, but once you learn how to read them, they're very comprehensive.

And the 3rd party content written about win32 is pretty evergreen. I regularly find articles written in the '90s that are helpful today. The beauty of an incredibly stable API, and documentation written before everyone had the internet.

Just like the frontend dev ecosystem. Wait...


But what defaults are there that are going to work with no titlebar, no standard window buttons, no border, no menu, no ribbon?

Personally, I find that any Windows application that is remotely polished will have its own win32 WindowProc anyway, even if written in higher-level tech.

For example, if you want custom window controls, you need to use a WindowProc + WM_NCHITTEST to tell windows where the buttons are, so the OS can do things like display the window snapping controls when you hover over the "Maximize" button.

Sidenote: as a designer, it's disappointing how many Windows apps are subtly broken in a bunch of these ways. It's not that hard. "Modern" UI frameworks generally don't do this work for you either; there's a real lack of attention to detail.


You could argue that what you built isn't novel or complex in any way -- (politely) it's basically a clone of hundreds of other SaaS homepages. i.e. it's a perfect use case for AI.

Perhaps the results would be different if you had a specific novel design or interaction in mind, and you wanted the AI to implement that exactly as you wanted.

edit: My point is proven by the other examples from this thread. Same format, same "feature cards", etc. https://bridge.ritza.co/ https://poolometer.com/


I think it’s ok that it’s similar to other SaaS websites. It wouldn’t exist if it weren’t for LLMs and it gets the job done and looks decent.

Similarly, the native Android photo picker strips the original filename. This causes daily customer support issues, where people keep asking the app developer why they're renaming their files.

https://issuetracker.google.com/issues/268079113 Status: Won't Fix (Intended Behavior).


Obviously an image picker shouldn't leak filenames... The filename is a property of the directory entry for the file storing the image. The image picker only grants access to the image, not to directories, directory entries or files.

If you want filenames, you need to request access to a directory, not to an image


"Obviously"

There are plenty of use cases where the filename is relevant (and many, many people intentionally use the image name for sorting / cataloging).


I have had more cases where I was very surprised that the local filename I used for something became part of its record when I uploaded it somewhere. (For instance, uploading an Mp3 using Discord on desktop web.)

There are many, many more cases where the user doesn’t expect the name to become public when he sends a photo. If I send you a photo of a friend that doesn’t mean I want you to know his name (which is the name I gave the file when I saved it)

So in webmail, when you upload an image / file to attach it to an email, you expect it to be renamed? I don't.

I email images as attachments very, very frequently. I go through the browser's file picker and I pick out the photo by its filename. I would be surprised and angry if somewhere along the way the filename got changed to some random string without my knowledge and consent.

In fact, I often refer to the name of the photo in the body of the email (e.g., "front_before.jpg shows the front of the car when I picked it up, front_after.jpg shows it after the accident.")

I imagine this is an extremely common use case.


The path is different from the filename though. If I want to find duplicates, it will be impossible if the filename changes. In my use case

/User/user/Images/20240110/happy_birthday.jpg

and

/User/user/Desktop/happy_birthday.jpg

are the same image.


> it will be impossible if the filename changes.

Not impossible, just different and arguably better - comparing hashes is a better tool for finding duplicates.


From a technological standpoint, sure. I'd argue when you're staring down the barrel of 19,234 duplicate file deletions, with names like `image01.jpg`, `image02.jpg` instead of `happy_birthday.jpg`, there's a level of perceptual cognitive trust there that I just can't provide.

^ facts

If your camera (or phone) uses the DCF standard [0], you will eventually end up with duplicates when you hit IMG_9999.JPG and it loops around to IMG_0001.JPG. Filename alone is an unreliable indicator.

[0]: https://en.wikipedia.org/wiki/Design_rule_for_Camera_File_sy...


> loops around to IMG_0001

Almost all cameras create a new directory, e.g. DSC002, and start from IMG_0001 to prevent collision.


Which systems still use this shortsighted convention? All photos I’ve taken with the default camera app in the last many years are named with a timestamp.

iOS 26

> If I want to find duplicates, it will be impossible if the filename changes.

Depends on what is meant by a "duplicate." It would be a good idea to get a checksum of the file, which can detect exact data duplicates, but not cases where metadata is removed or the image was rescaled. Perceptual hashing is more expensive but better at distinguishing matches between rescaled or cropped images.

https://en.wikipedia.org/wiki/Perceptual_hashing


It's not "obvious" at all; it's contextual: it depends on the purpose and semantics of whatever service you're uploading the photo to.

Depending on how it'll be used next, not only can the current filename be important, I may even want to give something a custom filename with more data than before.


This is a very weird set of choices by Google. How many users are uploading photos from their camera to their phone so they can then upload them from the phone to the web?

I bet almost 100% of photo uploads using the default Android photo picker, or the default Android web browser, are of photos that were taken with the default Android camera app. If Google feels that the location tags and filenames are unacceptably invasive, it can stop writing them that way.


My phone: my private space. Anything in the browser: not my private space.

I want exactly that: the OS to translate across that boundary with a sane default. It’s unavoidable to have cases where this is inconvenient or irritating.

I don’t even know on iPhone how files are named “internally” (nor do I care), since I do not access the native file system or even file format but in 99% of all use cases come in contact only with the exported JPEGs. I do want to see all my photos on a map based on the location they were taken, and I want a timestamp. Locally. Not when I share a photo with a third party.


It is not just a default when it is the only option.

The word default is more appropriately used when the decision can be changed to something the user finds more suitable for their usecase


> Anything in the browser: not my private space.

Google’s main business is ads, ie running hostile code on your machine.


> If Google feels that the location tags and filenames are unacceptably invasive, it can stop writing them that way.

Something can be "not invasive" when only done locally, but turn out to be a bad idea when shared publicly. Not hard to imagine that a lot of users want to organize their libraries by location in an easy way, but still not share the location of every photo they post online.


> Not hard to imagine a lot of users want to organize their libraries by location in a easy way, but still not share the location of every photo they share online.

The location isn't just embedded in the EXIF tags. It's also embedded in the visual content.

I imagine people will get tired of their image uploads being blacked out pretty quickly.


Definitely. I want to be able to search my Google Photos for "Berlin" and get me all the pictures I took there.

> How many users are uploading photos from their camera to their phone so they can then upload them from the phone to the web?

To _their phone_ specifically? Probably almost nobody. But to their Google/Apple Photos library?

A lot, if not most of people who use DSLRs and other point-and-shoot cameras. Most people want a single library of photos, not segregated based on which device they shot it on.


I used to send pictures over the camera wifi from my Sony W500 to my phone. The main purpose is backup (think I'm in the middle of nowhere or with little internet for days) and then to send them to friends with WhatsApp. If I'm at home I pull the SD card and read it from my laptop. It's quicker.

I do it all the time for different reasons:

- have a local backup

- being able to see them from a larger screen

- being able to share them

- sync them to home while I am away

I don't upload anything to google photos or apple cloud.


Yep, and having location data is really useful for organizing said photos.

I think it's really neat Google Photos lets you see all photos taken at a particular location. One of my pet peeves is when friends share photos with me that we took together at a gathering and only the ones I took with my phone show up in that list unless I manually add location data. (Inaccurate timestamps are an even more annoying related issue.)


People give Go's error handling a lot of flak, but I personally love the errors that come out of a quality codebase.

Just like your example: single line, to the point and loggable. e.g.

  writing foo.zip: performing http request (bar.com): tls: handshake: expired certificate (1970-01-01)

Exceptions with stack traces are so much more work for the reader. The effort of distilling what's going on is pushed to me at "runtime". Whereas in Go, this effort happens at compile time. The programmer curates the relevant context.

What?

What you write makes zero sense, see my comment here: https://news.ycombinator.com/item?id=47750450

And come on, skipping 5 lines and only reading the two relevant entries is not "much work". It's a feature that even when developers eventually get lazy, you can still find the error; meanwhile in Go you are at the mercy of the dev (and due to the repetitive, noisy error handling, many of the issues will fail to be properly handled - auto-bubbling up is the correct default, not swallowing)


Different strokes for different folks.

The Go errors that I encounter in quality codebases tend to be very well decorated and contain the info I need. Much better than the wall of text I get from a stack trace 24 levels deep.


Apples to oranges.

Quality Java code bases also have proper error messages. The difference is that a) you get additional info on how you got to a given point, which is an obviously huge win, b) even if it's not a quality code base, which, let's be honest, is the majority, you still have a good deal of information which may be enough to reconstruct the erroneous code path. Unlike "error", or even worse, swallowing an error case.


> reconstruct the erroneous code path

This is only useful to the developers who should be fixing the bug. Us sysadmins need to know the immediate issue to remediate while the client is breathing down our neck. Collect all the stack traces, heap dumps, whatever you want for later review. Just please stop writing them to the main log where we are just trying to identify the immediate issue and have no idea what all the packages those paths point to do. It just creates more text for us to sift through.


grep "caused by"

Here you are.


Why not just make your errors more readable and not have to use an extra tool?

Well, just write more readable error messages?

How do you make this more readable:

ExceptionName: Dev-given message at Class(line number) at Class(line number) caused by AnotherCauseException: Dev-given message at Class(line number)

It's only the dev-given message that may or may not be of good quality, the exact same way as it is in Go. It's a plus that you can't accidentally ignore error cases, and even if a dev was lazy, you still have a pretty good record for where a given error case could originate from.


Again, I am a sysadmin, not a developer. Telling me line numbers in files written in a language I don't understand is not helpful. I don't care where the error occurred in the code. I care what the error was so I can hopefully fix it, assuming it's external and not a bug in the code.

Don't have to grep my go errors :)

Especially when they forget to properly handle an error case among the litany of if err line noise, and you get erroneous code execution with no record of it!

As the author identifies, the idioms come from the use of system frameworks that steer you towards idiomatic implementations.

The system UI frameworks are tremendously detailed and handle so many corner cases you'd never think of. They allow you to graduate into being a power user over time.

Windows has Win32, and it was easier to use its controls than rolling your own custom ones. (Shame they left the UI side of win32 to rot)

macOS has AppKit, which enforces a ton. You can't change the height of a native button, for example.

iOS has UIKit, similar deal.

The web has nothing. You gotta roll your own, and it'll be half-baked at best. And since building for modern desktop platforms is horrible, the framework-less web is being used there too.


The author may have identified that "the idioms come from the use of system frameworks", but they absolutely got wrong just about everything about why apps are not consistent on the web (e.g. I was baffled by their reasons listed under "this lack of homogeneity is for two reasons" section).

First, what he calls "the desktop era" wasn't so much a desktop era as a Windows era - Windows ran the vast majority of desktops (and furthermore, there were plenty of inconsistencies between Windows and Mac). So, as you point out regarding the Win32 API, developers had essentially one way to do things, or at least the far easiest way to do things. Developers weren't so much "following design idioms" as "doing what is easy to do on Windows".

The web started out as a document sharing system, and it only gradually and organically turned over to an app system. There was simply no single default, "easiest" way to do things (and despite that, I remember when it seemed like the web converged all at once onto Bootstrap, because it became the easiest and most "standard" way to do things).

In other words, I totally agree with you. You can have all the "standard idioms" that you want, but unless you have a single company providing and writing easy to use, default frameworks, you'll always have lots of different ways of doing things.


Well, and worse, Windows was itself a hive of inconsistency. The most obvious example of UI consistency failing as an idea was that Microsoft's own teams didn't care about it at all. People my age always have rose tinted glasses about this. Even the screenshot of Word the author chose is telling because Office rolled its own widget toolkit. No other Windows apps had menus that looked like that, with the stripe down the left hand side, or that kind of redundant menu-duplicating sidebar. They made many other apps that ignored or duplicated core UI paradigms too. Visual Studio, Encarta, Windows Media Player... the list went on and on.

The Windows I remember was in some ways actually less consistent than what we have now. It was common for apps to be themeable, to use weirdly shaped windows, to have very different icon themes or button colors, etc. Every app developer wanted to have a strong brand, which meant not using the default UI choices. And Microsoft's UI guidelines weren't strong enough to generate consistency - even basic things like where the settings window could be found weren't consistent. Sometimes it was Edit > Preferences. Sometimes File > Settings. Sometimes zooming was under View, sometimes under Window.

The big problem with the web and the newer web-derived mobile paradigms is the conflation between theme and widget library, under the name "design system". The native desktop era was relatively good at keeping these concepts separated but the web isn't, the result is a morass of very low effort and crappy widgets that often fail at the subtle details MS/Apple got right. And browsers can't help because every other year designers decide that the basic behaviors of e.g. text fields needs to change in ways that wouldn't be supported by the browser's own widgets.


“Brand” and “branding” is arguably the most important thing -not- mentioned in the article. The commercial incentives to differentiate are powerful enough to kick a lot of UX out of the way.

Now that all we do is “experience” a “journey,” it’s more about the user doing what the app wants instead of the other way around


I was writing VB desktop apps when that whole ribbon menu thing came in. Everyone hated it. Literally everyone.

> First, what he calls "the desktop era" wasn't so much a desktop era as a Windows era - Windows ran the vast majority of desktops (and furthermore, there were plenty of inconsistencies between Windows and Mac).

That's overemphasising the differences considerably: on the whole Windows really did copy the Macintosh UI with great attention to detail and considerable faithfulness, the fact that MS had its own PARC people notwithstanding. MS was among other things an early, successful and enthusiastic Macintosh ISV, and it was led by people who were appropriately impressed by the Mac:

> This Mac influence would show up even when Gates expressed dissatisfaction at Windows’ early development. The Microsoft CEO would complain: “That’s not what a Mac does. I want Mac on the PC, I want a Mac on the PC”.

https://books.openbookpublishers.com/10.11647/obp.0184/ch6.x... It probably wouldn't be exaggerating all that wildly to say that '80s-'90s Microsoft was at the core of its mentality a Mac ISV, a good and quite orthodox Mac ISV, with a DOS cash-cow and big ambitions. (It's probably also not a coincidence that pre-8 Windows diverges more freely from the Mac model on the desktop and filesystem UI side than in regards to the application user interface.) And where Windows did diverge from the Mac those differences often ended up being integrated into the Macintosh side of the "desktop era": viz. the right-click context menu and (to a lesser extent) the old, 1990s Office toolbar. And MS wasn't the only important application-software house which came to Windows development with a Mac sensibility (or a Mac OS codebase).


> Developers weren't so much "following design idioms" as "doing what is easy to do on Windows".

Most people only use one computer. Inconsistency between platforms has no bearing on users. But inconsistency of applications on one platform is a nightmare for training. And accessibility suffers.


I don't disagree, but my point was about the author's incorrect diagnosis of the reason (and solution) for the problem, not that the problem doesn't exist.

As a sibling commenter put it, previously developers had "rails" that were governed by MS and Apple. The very nature of the web means no such rails exist, and saying "hey guys, let's all get back to design idioms!" is not going to fix the problem.


It doesn't matter that different platforms have different standards, as long as applications on any given platform are mostly consistent.

I don't care if your app looks different on Windows, because I'm on a Mac. I care that it behaves like a Mac application, and the muscle memory I have from all my other Mac apps also works on yours.


If you say that too loud, the “but my brand's unique UI supersedes your functional requirements” people will emerge, screeching, from the woodwork!

I can’t prove it, but I just know they’re the ones who live their lives one NPS score at a time, and must think that we operate our software, being thankful for every custom animation that they force us to sit through on their otherwise broken and unimportant software.


Conventions already existed in DOS (CUA) and MacOS. The point is, every operating system had its user interface conventions, and there was a strong move from at least the mid-1980s to roughly the mid-2000s that applications should conform to the respective OS conventions. The cross-platform aspect of the web and then of mobile destroyed that.

I partially agree with you, but additionally there's a whole set of employees who would be clearly redundant in any given company if that company decided to just use a simple, idiomatic, off the shelf UI system. Or even to implement one but without attempting to reinvent well understood patterns.

One reason so many single-person products are so nice is because that single developer didn't have the time and resources to try to re-think how buttons or drop downs or tabs should work. Instead, they just followed existing patterns.

Meanwhile when you have 3 designers and 5 engineers, with the natural ratio of figma sketch-to-production ready implementation being at least an order of magnitude, the only way to justify the design headcount is to make shit complicated.


But every company I worked at in the past 10 years or so eventually coalesced around a singular "design system" managed by one person or a small core team. But that just goes back to my original point - every company had their own design system, and there is not a single, industry-wide set of "rails".

The bigger issue I see with the "got to keep lots of designers employed" problem is the series of pointless, trend-following redesigns you'd see all the time. That said, I've seen many design departments get absolutely slaughtered at a lot of web/SaaS companies in the past 3 years. A lot of the issues designers were working on in web and mobile for the 25 years prior are now essentially "solved problems", and so, except for the integration of AI (where I've seen nearly every company just add a chat box and that AI star icon), it looks like there is a lot less to do.


> But every company I worked at in the past 10 years or so eventually coalesced around a singular "design system" managed by one person or a small core team.

As a designer, the issue I see is that desktop design requires knowledge and experience of the native toolkits.

This makes desktop the hardest platform to design (well) for.

For example, on macOS you need to know about where the customisation points are in NSMenu, you need to know a little about the responder chain etc.

Most designers only have web or mobile experience, and the nuances of the desktop get lost, even at the design stage. You end up with a custom and shallow system that is weird in the context of the OS.

You also end up with stuff like no context menus, weird hover states (hand cursors anyone?), weird font and UI sizing (why are Spotify's UI elements literally twice the size of native controls? The saving grace of it being an Electron app is that I can zoom out 3 steps to make the UI size sane). I digress...


Yeah the author conveniently ignores the fact that the UX of Mac apps was radically different to that of PC apps, so it’s not that designers/developers were somehow more enlightened back then, it’s just that they were “on rails”

The web was designed for interactive documents, not desktop applications. The layout engine was inspired by typesetting (floating, block) and a lot of its elements only make sense for text (<i>, <span>, <strong>,...). There's also no allowance for dynamic data (virtualization of lists) and custom components (canvas and SVGs are not great in that regard).

> building for modern desktop platforms is horrible, the framework-less web is being used there too.

I think it's more related to PM wanting to "brand" their product and developers optimizing things for themselves (in the short term), not for their users.


> The web has nothing. You gotta roll your own, and it'll be half-baked at best. And since building for modern desktop platforms is horrible, the framework-less web is being used there too.

This feels like the root cause to me as well. Or more specifically, the web does have idioms, the problem is that those idioms are still stuck in 1980 and assume the web is a collection of science papers with hyperlinks and the occasional image, data table and submittable form.

This is where the "favourites" list and the ability to select any text on a web page came from.

Web apps not only have to build an application UI completely from scratch, they also have to do it on top of a document UI that "wants" to do something completely different.

Modern browsers have toned down those idioms and essentially made it "easier to fight them", but didn't remove or improve them.


"The Web" has evolved into a pretty bad UI API. I kind of wish that the web stuck to documents with hyperlinks, and something else emerged as a cross-platform application SDK. Combining them both into HTML/CSS/JS was a mistake IMO.

That’s not the only reasons. When you are used to how your operating system does things consistently, as a developer you naturally want your application to also behave like you’re used to in that environment.

This eroded on the web, because a web page was a bit of a different “boxed” environment, and completely broke down with the rise of mobile, because the desktop conventions didn’t directly translate to touch and small screens, and (this goes back to your point) the developers of mobile OSs introduced equivalent conventions only half-heartedly.

For example, long-press could have been a consistent idiom for what right-click used to be on desktop, but that wasn’t done initially and later was never consistently promoted, competing with Share menus, ellipsis menus and whatnot.


The web did have HTML and CSS, but as the author notes those have been bypassed for Web Assembly and other technologies.

Date picker and credit card entry should always always always use the default HTML controls, and the browser and OS should provide the appropriate widget for every single web page. For credit cards especially, the Safari implementation could tie in to the iOS Apple Wallet or Apple Pay, and Android could provide the Google equivalent. This allows the platform to enforce both security policy and convenience without every developer in the world trying to get those exactly right in a non-standard way.


    <button>Click me</button>
Is how you do it on the web. The problem is that your app will not look as good as others and that it will look different on different platforms.

> You can't change the height of a native button, for example.

You can definitely do so, it's just not obvious or straightforward in many contexts.


Perhaps the situation has changed since I last tried.

It used to be, in AppKit, that a normal NSButton could have its size class changed (small, regular etc.) but you couldn't set the height without subclassing and doing the background drawing yourself!


Bootstrap was nice.

iOS decided square checkboxes were ugly, and design patterns are flowing from mobile->desktop these days.

I think Apple does stuff like this because a) they can get away with it and b) they know countless competitors who can't get away with it will blindly follow their shitty new design paradigm.

And yet, Microsoft is still doing their own thing. It's a shame they didn't follow through with, and / or the industry didn't follow along with, Windows Phone because it was a pretty unique design.

Google / Material Design also does their own thing still.


Plus, Apple exempt their own apps from a bunch of these permissions (because it would be an unacceptable user experience for their customers)

The actual best open source Dropbox replacement already exists: SeaFile. Too bad their website is terrible.

https://www.seafile.com/en/home/

It's pretty magical. It nails the "online" vs "cloud only" paradigm via the SeaDrive client. I have it running on my file server, and now all my machines have access to terabytes of storage with local performance, since it can cache a subset of your content locally.

And since I can run the server on my LAN, the throughput is way better than Dropbox would be too.

