Hacker News

... for PostgreSQL developers, not users.


From the "bad" section:

Robert Haas asked for suggestions toward the solution of some nasty data-corruption issues associated with the "multixact" feature [...] the relevant multixact changes were merged during the 9.3 development cycle. The 9.3 release happened in September 2013, but the fallout from that particular change is still being dealt with.

there is concern within the PostgreSQL community that its well-earned reputation for low bug rates is at risk


But it is a look at the process that eventually impacts users. It explains why upsert has taken so long: a bit of chaos and a bit of politics.


I don't think upsert has taken so long because of the community and its processes, but because the PostgreSQL team has high requirements for it: http://www.depesz.com/2012/06/10/why-is-upsert-so-complicate...

For example, upsert that is worth releasing should:

* Work with all unique indexes that may exist on a table, not only a primary key,

* Be correct even under highly concurrent writes, at any isolation level from Read Uncommitted to Serializable,

* Be fast even under highly concurrent writes: no table locks, no locks held for more than an instant, and no re-running the entire transaction in a loop,

* Never throw an "another transaction modified data" error at the Read Committed isolation level; it should always correctly succeed as an insert or an update, even if another concurrent transaction inserted or deleted the matching row.

I'm not very familiar with the other database engines, but my impression has been that their upsert or SQL MERGE implementations do not provide all of these.
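For illustration, the single-statement semantics the list above asks for (no retry loop, no "modified data" error) can be sketched with the `INSERT ... ON CONFLICT DO UPDATE` syntax that PostgreSQL eventually shipped in 9.5. The sketch below uses Python's sqlite3 module, since SQLite adopted the same clause; the table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER NOT NULL)"
)

def upsert(name):
    # One atomic statement: inserts the row if absent, otherwise
    # increments the existing row -- no retry loop, no table lock,
    # and no "another transaction modified data" error surfaces.
    conn.execute(
        "INSERT INTO counters (name, hits) VALUES (?, 1) "
        "ON CONFLICT (name) DO UPDATE SET hits = hits + 1",
        (name,),
    )

upsert("home")
upsert("home")
print(conn.execute(
    "SELECT hits FROM counters WHERE name = 'home'"
).fetchone()[0])  # -> 2
```

Note the conflict target `(name)` names a specific unique constraint; satisfying the first requirement above means the mechanism must work against any unique index on the table, not just the primary key.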




