Hacker News | _cbdev's comments

I've had mail sent via my own server rejected by Gmail because of a missing Message-ID header. The 550 reject message was the standard "Unsolicited Mail detected" text; the same mail was accepted without causing any fuss once the Message-ID was added.


Why on earth wouldn't your mail server add that for you?


Because I'm writing my own ;)

https://github.com/cmail-mta/cmail if you're interested.


FWIW I enjoyed the challenge of hacking up my own version of this. It's admittedly not battle-tested, but hey ;)

https://github.com/cbdevnet/libgit2-backends/blob/master/pos...


Awesome!

Incredible to get two implementations.

I really appreciate that you've made this happen.

Is this built from scratch or a port of one of the existing drivers? Did you have a chance to have a look at the other implementation - what do you see as the major differences?

Any thoughts of submitting to the libgit2 project?


I copied the prototypes and general structure of the driver from the existing ones (mainly the sqlite driver), implementing the functionality with the appropriate postgres API calls.

As for the implementation by mhodgson, one thing I noticed is that it uses the slightly newer PQexecPrepared API, which only works from protocol version 3.0 and upwards (which was introduced with PostgreSQL 7.4, so not really a problem). He also seems to use some more calls to the libgit2 API, mainly for copying and creating buffers for SQL statements (which I did with static allocation, at the expense of having hardcoded schema and table names).

All in all, mhodgson's implementation seems more feature-complete (and well-tested) than mine, yet also considerably more complex.

As for submitting it, I'd like to have some confirmation that it works first :) Unfortunately, I'm currently too busy to really set up some tests for it, but once I have the time, I just might ;)


Gah! You beat me to it :)


I decided to do it just for the fun of it.

https://github.com/cbdevnet/libgit2-backends/blob/master/pos...

Caveat emptor: I did not test this, since the Debian libgit2-dev package somehow seems not to include the GIT_* constants. It should probably work, though.


> Mail services are usually made far more complicated than they should be, and I understand there are a lot of desired features...

That was my impression of the whole thing, too. I've long had an exim configuration I could decorate my walls with, without understanding what most of it did or whether it was secure.

Recently, I got so fed up I began writing my own mail server suite. It's still pretty basic and in development, but it does have some of the features you mention, namely:

> SMTP

> POP3

> WebMail (though rudimentary)

> WebAdmin

> Multi-Domain support

In the pipeline, but not yet ready:

> IMAP

> Plugins

Some of the goals of the project are to have a mail processing suite with a clear interface between the modules, as well as easy extensibility and configuration.

Some people I've talked into testing it and I already run a few instances, and so far it has proved pretty stable.

Caveat: The backend is an SQLite database, so if your use-case is serving a lot of clients, there might be some lock contention.

If you're interested, check out https://github.com/cmail-mta / http://cmail.rocks/


Wow, exactly what I was looking for. Keep up the good work! I will follow the progress, as I am interested in the upcoming IMAP support.


If you separate each account's folders/inbox into a separate SQLite database, with the accounts/configuration in another, it probably wouldn't be too bad even at moderate scale.

But for < 100 users, one SQLite db would probably be sufficient (on a relatively fast drive/SSD).


That's actually exactly what you can already do with cmail ;)

Each user can optionally be assigned a "user database", storing only the mails in her own inbox (which also allows users to have direct control over their own mail database).

If this is not used, mail is stored in the master database.

As you said, most normal deployments should not run into those limits, but it's worth keeping them in mind.


Why not use Maildir, or really any other standard scalable mail storage format? The most annoying thing in the universe is trying to export mail from one client to another with incompatible formats. Well, okay, maybe Vogon poetry is worse, but e-mail is a close second.


Well one reason being I really like SQLite.

SQLite has a pretty great C API making it really easy to programmatically store, modify and retrieve data, which made it my first choice for runtime configuration data. Adding another storage backend for the mail data would have meant a lot more code paths to support.

SQLite also has a lot of tools surrounding and supporting it (I particularly like SQLiteStudio) which allows querying/modifying/analyzing your mails with a huge number of programs, and using SQL - something I've found really useful!

With a sufficiently sane schema (which I hope we've achieved), transforming from the database format to any other format becomes really easy!

Also, since cmail is not really being developed with the intent to have people access their mail with a shell account (the main reason being that cmail users should not have to map to system users), supporting access formats on the server side is less of a priority - preferred access is via POP (already available) and IMAP (once it's done). The user database feature allows cmail users that ARE system users to control their own database.

An exporter to convert to/from Maildir is a planned feature :)


Hear, hear. Aside from a big directory of ASCII text files, I can't think of a more future-proof method of storage. So SQLite has that going for it, in addition to the extension API, which is awesome. I recently had to write some integration software for Kronos and a no-name security system; since I was already storing the data in SQLite, I decided to handle the Kronos business logic with a loadable extension, and it was a pretty pleasant experience. Actually, I'm pretty sure there's an extension out there that allows you to use a directory of CSV files for table storage...


Did you have a look at http://dbmail.org/ ? And/or http://aox.org/ (Archiveopteryx) ?

I've been thinking of using one of their schemas as a base for doing something similar for an email client/interactive mail archive type thing (think: hyperkitty/pipermail).

Also notmuch has done some work for mapping email to xapian for search/tagging/indexing -- could also be a good source of inspiration I think.


I was aware of dbmail, but did not peruse the source in depth, in part because it does not itself implement SMTP.

Archiveopteryx I did not know, but thanks for the tip!

Using their databases as the basis for new tools has the obvious benefits of cross-compatibility, though there's always the drawback of database cruft that accumulates by virtue of differences in implementation or project goals.


Definitely a great and very instructive read, and one of the few to include IPv6 and the correct usage of getaddrinfo.

It's the guide I recommend to everyone who wants to learn about network programming in C. I actually purchased the (slightly expensive) printed version and never regretted it :)

http://www.lulu.com/shop/brian-hall/beejs-guide-to-network-p...
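The getaddrinfo() pattern the guide teaches, in miniature: one hints structure, one call, and a walk over the returned list that works identically for IPv4 and IPv6 (resolving localhost here so no network access is needed):

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;   /* v4 or v6, whichever is available */
    hints.ai_socktype = SOCK_STREAM;

    int rc = getaddrinfo("localhost", "80", &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return 1;
    }

    /* Each result can be fed straight into socket()/connect() */
    for (p = res; p; p = p->ai_next)
        printf("family=%s\n", p->ai_family == AF_INET6 ? "IPv6" : "IPv4");

    freeaddrinfo(res);
    return 0;
}
```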


Why is $20 "slightly expensive" for this book? Or was it not $20 when you bought it?


Adding shipping costs to Germany bumps the price to about 22 USD (21 EUR is what the receipt says). Considering the number of pages in this book relative to its price, it is slightly expensive compared to your average "normal" book.

Compared to other educational textbooks, though, I wouldn't be able to complain. Nor am I, as I still think it was worth it, if only to support the author and to have an addition to the bookshelf.


The getaddrinfo bit was a lifesaver recently, when I suddenly had to implement a network client in C for the first time in years. Long live Beej!


That already happened.

See http://www.iana.org/domains/root/db


I am amazed no one suggested a simple GPG operation.

Generate a keypair on an airgapped machine, and keep the private part on a secure external medium (e.g. a CF card in a bank safe).

Have the public key on your normal machine, write your journal in normal text files (use a ramfs if you worry about it being restored by forensics) and encrypt against the public journal key. Decryption is only possible with the private part, the public part can even be, well, public. :)
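The whole workflow is a handful of gpg invocations; a sketch (key UID and filenames are placeholders, and a real setup would move the exported files via removable media):

```shell
# On the airgapped machine: generate the pair, export both halves
gpg --quick-generate-key "Journal <journal@example.invalid>"
gpg --export --armor journal@example.invalid > journal.pub            # goes to the work machine
gpg --export-secret-keys --armor journal@example.invalid > journal.sec # goes to the bank safe

# On the everyday machine: import only the public half, then encrypt entries
gpg --import journal.pub
gpg --encrypt --recipient journal@example.invalid 2024-01-01.txt
shred -u 2024-01-01.txt   # remove the plaintext (or keep it on a ramfs to begin with)
```

Without journal.sec, the everyday machine can only ever encrypt; it has no way to read old entries back.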


I especially like the absurdity presented in this diagram:

http://sshkeybox.com/img/keybox_dia.jpg

Not only does it create a single point of failure in the administration layer, but the third case illustrates that this is intended as a feature: it advertises blocking normal SSH traffic to the server network and replacing it with HTTPS traffic.

Please don't do this. Just accept SSH and learn to deal with it. If you can't, maybe systems administration is not for you and you should pay someone to do it.


It's definitely not for everyone. Inbound/outbound SSH is usually blocked on corp networks. One of the reasons is that you can tunnel/forward ports and expose the internal network; HTTPS takes that away. Plus, you can't copy files off the server, and the idea is that you can audit what is being done. Depends on what the threat is, IMHO.


Corp networks are almost never a shining example of network design or security. Most people doing that sort of work are the checkbox type and will block SSH (and other protocols) because it gives them a sense of warm fuzzies.

The reality is this is stupid and just leads to people working around the problem. Take, for example, corkscrew, which allows you to tunnel SSH over HTTPS proxies without losing any of its port forwarding or other juicy features.

If you feel like you really need to embarrass them, you can of course run something like sshuttle over the top, providing automatic VPN routing over this rogue SSH tunnel.

Worse yet, they might install KeyBox on their AWS deployment because they don't have SSH outbound, which then promptly gets pwned because it's a relatively unproven Java web application rather than a battle-hardened Unix staple (i.e. sshd).

Corp networks and infosec departments are a joke.

In terms of auditing, all you need to do is make proper use of the *nix permission model, enable sudo input/output logging, and control access to sudo via LDAP groups. Send all of your sudo logs to a centralized logging system like Splunk/ELK/fluentd/Flume etc.
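Concretely, the sudo part of that is a couple of lines in sudoers (the group name is illustrative; sudoreplay then plays back the recorded sessions):

```
Defaults    log_input, log_output
Defaults    iolog_dir=/var/log/sudo-io
%wheel      ALL=(ALL) ALL
```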


I'm all for your contempt of corporate network security policies, honestly, but I think they're typically trying to stop most users, who are not particularly sophisticated, and basically assume that anyone with half a brain will get around them eventually. That's why audit policies are much more rigorously enforced than network tunneling restrictions, for example.

I regularly defeat most policies if I really have to get work done, but the real deterrent, sadly, is for employees to be fearful of getting caught and having to deal with the immense mountain of bullshit paperwork and training that is likely to follow. Completely meaningless paperwork-filling is a form of disciplinary action, in my opinion.


For example, at the US Air Force, the following policies are in place:

* Use of SSH is discouraged, because the traffic is encrypted, and the Air Force wants to be able to read all the traffic in and out of boxes.

* If you do get a waiver for SSH, the system admins require you to hand over all your SSH private keys, so that they can decrypt the traffic, read it, and re-encrypt it.


Yup! Make administration less confidential, but try and keep it as secure as possible. Inside attacks are a big problem in the financial industry too. I like the idea of controlling it through a hardened web-app, but may need a little help getting there.


Why? You can protect against insider threat by auditing the target system itself and streaming said audit logs to write only media.

Building a single point of compromise has no advantages over this and many disadvantages beyond just security.


Auditing can be a deterrent to an attack, but won't necessarily protect you against it. It's a good way of letting you know what has happened after the fact.

Think if you had a DB with financials (credit cards and such) in an isolated DMZ with SSH inbound/outbound blocked.

How could you dump the DB and copy it off if all the traffic were "proxied" through this? It's not like you can scp a tarball anywhere.

You can't forward ports and expose the DB outside of the DMZ either.

And it doesn't take admins to set up auditing or disable forwarding; you have physically disabled it.

I have to say I don't understand the single-point-of-compromise thing. A single point of failure is bad, but the fewer points of compromise, the better. You identify your critical systems, you protect your critical systems. Spreading things out doesn't make you more secure.

Here is an old white paper on some things to think about SSH in your infrastructure.

http://www.sans.org/reading-room/whitepapers/vpns/security-i...

You're right in saying this is an unproven application!


You can tunnel through HTTPS too, given a client/server pair that supports HTTP CONNECT. Conversely it's possible to disable the SSH port forwarding function. From the firewall's point of view, they ought to be equally risky.
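For the curious, the CONNECT mechanism is a one-line handshake: the client asks the proxy to splice a raw TCP stream to a destination, and after the 200 response, SSH (or anything else) flows through unchanged. Host and port below are placeholders:

```
CONNECT ssh.example.com:22 HTTP/1.1
Host: ssh.example.com:22

HTTP/1.1 200 Connection established
```

This is exactly what tools like corkscrew speak on the client side, which is why "block SSH, allow HTTPS" firewall rules are largely theater.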


> Keyless in the first password manager to provide complete hacking protection.

Uh-huh. Pretty strong words there. Let's for a moment ignore the fact that this claim is ridiculous and that companies or individuals making it should never be trusted: I can't find anything on your webpage that goes beyond marketing copy and "it's magic!" rhetoric.

Seriously, not one technical statement?

To stand any chance in that market I'd strongly advise you to include more detailed descriptions of how your "magic" works. And you can't use full-page photographs in it.


Thanks _cbdev for your input. We are currently in beta, hoping to get more input/suggestions to make our website better and to check the appetite/needs of technical and non-technical people. As you advised, we will include more technical description on our site. Thank you again!


>companies need to replace their switches.

Actually, most switches are just fine and don't need replacing. IPv6 is a layer-3 protocol; most "normal" switches operate on layer 2 (the Ethernet level), which stays the same and (in the best case) neither knows nor cares what goes on in the layers above. These can stay, and most wouldn't even need to be reconfigured.

As for layer-3 switches (the ones that do some amount of routing, too), most brand-name models purchased in the last 10 years should support IPv6.


Most of the hardship, in my experience, comes from the apps, especially the home-grown ones.

Let me back this up with an anecdote from experience in dual-stacking the websites at my employer (a curious reader might notice that cisco.com, download.cisco.com, software.cisco.com, tools.cisco.com, cisco-apps.cisco.com are all dualstack. The last one is interesting because it hosts the ordering portal, with IPv6 being a transport for a non-trivial portion of the hardware orders).

While the main cisco.com has been dualstack since the v6 launch, the rest of the properties required more work, because there are a bazillion different apps there, so they launched only about a year ago.

And yet despite all the testing, once we went live, we realized one bug had slipped through. The name of the error was especially ironic, and the bug, while in a somewhat infrequently used portion, was very visible to IPv6-enabled users.

http://www.gossamer-threads.com/lists/nsp/ipv6/47796 for the full externally visible recount of the matter.

Back then, the percentage of IPv6 users accessing the erroring function was low enough that we did not roll back the entire set of changes and just had the fix developed and deployed, so the whole scenario was relatively painless. (Aside from some semi-friendly beating-up during the IPv6 working group at the RIPE meeting, where the error showed up vividly since we had an IPv6-only pilot WiFi SSID alongside the usual dualstack one.)

If the same story were to happen at 50% IPv6 adoption? That would hurt way, way more.

The moral:

If you're a big shop, start auditing your apps now, even if you don't think you'll need it until 3 years from now. If you're not sure, there are a bazillion resources and people available to help, both for free and for money.

If you're a small shop and don't have any apps: RTFM, assess, and JustDoIt(tm), in a staged manner, of course, all disclaimers apply, etc. The sooner you get a (small) chance to make your mistakes while taking the first steps with IPv6, the cheaper those mistakes will be. Of course it's best to avoid them, but still.

Ok, I'm officially off my "IPv6 soapbox" on this thread, hopefully these were useful to some folks. ;-)


> > companies need to replace their switches.

> Actually, most Switches are just fine and don't need replacing. IPv6 is a Layer 3 Protocol, most "Normal" Switches operate on Layer 2 (The Ethernet Level, which stays the same and (in the best case) does neither know nor care what goes on in Layers above).

But these can't do routing, I assume. I think I may have misspoken, and said "switches" when I should have said "routers".

If routers from the last 10 years all support IPv6, that's probably part of the reason that IPv6 access to Google from the US is at 10%.

