I already blogged about it when it was announced in December last year [0]. Adding SQL Server compatibility opens Postgres to new users, use cases, and markets. It helps one of the most general-purpose databases reach even more "purposes".
I hope work and discussions to integrate this upstream will start soon and be fruitful. It won't be easy, but I believe it would definitely be worth the effort.
Well, this is a big deal for me: my father has run his business off a sprawling MS Access application, which I've struggled at various times to move to something more modern, with no success.
I wonder if this will work for connecting to MS Access directly.
If the MS Access application is backed by an actual MS SQL server, maybe. But if (and in my experience this is more likely) it uses MS Access itself with MDB files (Microsoft's Jet database engine - imagine SQLite but inferior in every conceivable way), you're probably better off writing a tool in .NET or similar to migrate the data.
In a past job supporting an industrial controls firm, I wrote an interactive merge tool for AutoCAD Electrical parts catalog files (hideous little MDBs behind the scenes)...it was an unholy clusterfuck of C#, Lucene, NodeJS, and a smidgen of VueJS on the frontend. Not my proudest work, but it sure beat the usual tasks involving VBA in FactoryTalk.
It's interesting, I wouldn't have guessed the SQL dialect differences to be the main reason to pick one database over another.
Other comments on this suggest it's very important to them, which is a surprise to me.
Rather, I would have bet on the performance characteristics for your particular situation, the knowledge of your programmers (or DB admins) and recruitment pool, or perhaps integrations with other software.
Still, a very useful looking tool, don't want to knock it.
This may help PG take serious market share from MSSQL.
Also: I wonder if the shiny GUI tools that exist for MSSQL will then work against PG, and ... to what extent these tools will work with regular PG tables.
I really like DBeaver[1] (what I use myself), but I've seen stuff done by MSSQL DBAs using expensive GUI tools that have amazed me.
As a user of T-SQL in an MSFT/Azure shop, I feel increasingly isolated from other dialects. Virtually all other databases these days use a dialect that's closer to Postgres. Does T-SQL have a future? If you don't have legacy DBs and you have a choice of databases, would T-SQL ever make sense to use today?
T-SQL has huge warts, but it is one of the better procedural SQL languages I've used. TDS (the wire protocol) also has warts, but it is vastly superior to the PG wire v3 protocol. Lastly, T-SQL is a single, integrated language, which means you can go from basic SQL to procedural SQL in the same batch. T-SQL also has first-class support for multiple result sets, which is very useful. T-SQL and TDS both support named parameters, whereas PG only supports ordinal positions.
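The single-batch point can be illustrated with a small T-SQL sketch (table and column names here are made up for illustration): a variable declaration, procedural logic, and two result sets all live in one batch, something plain Postgres SQL can't express without a function.

```sql
-- T-SQL sketch: procedural code plus multiple result sets in one batch
-- (illustrative schema; not from the thread)
DECLARE @cutoff date = '2021-01-01';

IF EXISTS (SELECT 1 FROM customers WHERE created < @cutoff)
    SELECT id, name FROM customers WHERE created < @cutoff;   -- result set 1

SELECT COUNT(*) AS old_orders FROM orders WHERE placed < @cutoff;  -- result set 2
```

The named-parameter point is similar: a TDS client sends `@cutoff` by name, while a Postgres extended-protocol client binds `$1`, `$2`, ... strictly by position.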
TLDR: this is about some kind of adapter layer ("Babelfish") that lets you serve Microsoft SQL Server clients from a PostgreSQL backend, sort of like how Samba serves Microsoft SMB clients from a Linux file system. It has nothing to do with natural language translation. Oh well.
We've made the title be (a substring of) the article title now. Submitted title was "AWS (finally) drops code for Babelfish – the SQL Server to Postgres translator".
I don't see any enterprise customer using this. MS SQL Server is relatively cheap. Now if this thing can work for Oracle or IBM DB2, then maybe it will be worth the risk.
At $14K/core [0] for the Enterprise edition, it doesn't exactly fit my definition of cheap. Sure, it's a quarter of Oracle's price, but it still translates to 6-7 figures for even small deployments, which again is not cheap to me.
But I guess the main attraction for current SQL Server users might be licensing terms/compliance problems more than cost. Just trying to understand how to license SQL Server and how cores are counted already requires you to read a 42-page guide [1]. The risks associated with incorrectly purchasing licenses for your SQL Server cluster(s) may be a compelling reason to jump to open source with minimal migration costs (at least compared to migrating directly to Postgres or any other open source database).
Licenses are sold in 2-core packs, so it's $7,000 per core. (The link you included is the 2-core pack pricing.)
> still translates to 6-7 figures for even small deployments
The majority of deployments are 8 cores or less [0], so even if they were Enterprise - which they're not - they're 5 figures. Not saying it's cheap, but it's an order of magnitude less expensive than what you're suggesting.
I assumed it is $14K/core because the Standard edition is quoted as "Standard - per core". It's weird to me that in the same table the number above it is not per core. But you know this much better than I do, so I'll take your number: $7K/core.
Thank you for sharing such interesting data on usage. It is very surprising to me, though. At least coming from my background (a company providing Postgres professional services), we usually see much bigger deployments.
In any case, I expect servers to have replicas for high availability. So even an 8-core server would count for a total of 16 or 24 cores in a 2-3 node cluster, wouldn't it? And then this is multiplied by the number of clusters you have.
Some of our customers run 1,000+ cores on a single cluster. Others run dozens of clusters, where each cluster has 3 nodes with 16 or 32 cores each. That all adds up to a thousand-plus cores, arriving at the 6-7 figures (list price) I was referring to. It's surprising to me that in the SQL Server world deployments are significantly smaller. But very useful information!
What I do believe is that there's a very interesting use case here. AFAIK Amazon always makes decisions based on customer requests and the data they have. They offer both Postgres and SQL Server on RDS. And if they created Babelfish and published it as a managed service, it's probably because there is a significant number of customer requests. Whether that's driven by licensing savings, license/compliance uncertainty, or something else, I don't know for sure.
> In any case, I expect servers to have replicas for high availability. So even an 8-core server would count for a total of 16 or 24 cores in a 2-3 node cluster, wouldn't it? And then this is multiplied by the number of clusters you have.
As long as you're protected by Software Assurance, you get one free replica for high availability, another free one for disaster recovery, and yet another in Azure.
However (unless things have changed recently), you have to license at least 4 cores. The 2-core pack only exists so you can use it in conjunction with 4-core packs to license (e.g.) 6 cores.
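As a rough sketch of how the pack math above works out (using the $7K/core list price, 2-core packs, and the 4-core minimum mentioned in this thread; it deliberately ignores Software Assurance replica benefits and any negotiated discounts):

```python
PRICE_PER_CORE = 7_000  # Enterprise list price: ~$14K per 2-core pack
MIN_CORES = 4           # minimum cores that must be licensed per server

def license_cost(cores: int) -> int:
    """Approximate list-price cost to license one SQL Server host.

    Rounds up to whole 2-core packs and enforces the 4-core minimum.
    """
    billable = max(cores, MIN_CORES)
    billable += billable % 2  # licenses come in 2-core packs
    return billable * PRICE_PER_CORE

# A typical 8-core server lands at $56K list; a small 2-core VM
# still pays for 4 cores ($28K).
```

So even at list price, a single small server stays in 5 figures, consistent with the point above; only multi-server, many-core estates climb toward 6-7 figures.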
I can get on the phone with a sales rep and get a custom quote in a few hours, without having to read a thing. And it'll probably be discounted at least 30-50% (or more) from whatever is listed. Microsoft has operated this way for decades. Its cost has also never once come up in a budget review meeting.
That's largely because alternatives like Postgres exist. If they didn't, then I'm pretty sure the conversation would be more along the lines of "take it or leave it."
People shouldn't underestimate the commoditization of databases that has occurred since MySQL and Postgres became acceptable alternatives.
It has nothing at all to do with Postgres, or with underestimating anything. Microsoft has a very large, robust partner network that's able to get great pricing, and the cost of MSSQL really hasn't changed much over the years outside multi-core updates.
Those customers also know how to manage SQL Server, and the software running on it won't support PostgreSQL with Babelfish, in the sense that any support contract goes right out the window.
I think this is mostly for companies that run Postgresql, but need to bring in a single application which only supports SQL Server.
I wonder if you can make SQL Management Studio run against Babelfish.
Postgres takes a lot of design inspiration from Oracle, but AFAIK it has never made any real attempt at being strictly compatible with it.
You might be thinking of dblink in Oracle's world, and foreign data wrappers (FDWs) in Postgres, which at least let you transparently interact between the two for DQL only. DDL, procedural logic, etc. definitely differ.
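For illustration, the Postgres side of that might look like the following minimal sketch, assuming the third-party oracle_fdw extension is installed and using made-up connection details; only queries pass through transparently, not DDL or PL/SQL:

```sql
-- Sketch: reading an Oracle table from Postgres via oracle_fdw
-- (illustrative host, credentials, and schema names)
CREATE EXTENSION oracle_fdw;

CREATE SERVER ora FOREIGN DATA WRAPPER oracle_fdw
    OPTIONS (dbserver '//orahost:1521/ORCL');

CREATE USER MAPPING FOR current_user SERVER ora
    OPTIONS (user 'scott', password 'tiger');

CREATE FOREIGN TABLE emp (empno integer, ename text)
    SERVER ora OPTIONS (schema 'SCOTT', table 'EMP');

SELECT ename FROM emp;  -- DQL is pushed to Oracle transparently
```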
There's also EnterpriseDB, a commercial version of PostgreSQL that has added support for the Oracle SQL dialect, including the stored procedure language (PL/SQL, or whatever it's called).
I don't think it speaks the Oracle wire protocol though.
No, it doesn't speak the Oracle wire protocol.
But the Oracle PL/SQL (source-code) compatibility from Postgres was ported to DB2, and IBM claims 100% compatibility.
[0]: https://www.ongres.com/blog/aws_announces_open_source_postgr...