Like a complete application rewrite, switching relational database systems for a production application is rarely a good idea. Sometimes you need to do it anyway.
So, how do you migrate your data? Here is a complicated document about it, and here is another, more official-looking, complicated document.
The MySQL foreign data wrapper (mysql_fdw) lets you access MySQL tables directly from PostgreSQL as if they were Postgres tables. See the foreign data wrapper's documentation for setup details.
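As a rough orientation, the setup looks something like the following. This is a minimal sketch: the server name, host, credentials, and table definitions are placeholders, and option names vary between mysql_fdw versions, so defer to the documentation for your version.

-- Minimal mysql_fdw setup sketch; all names and credentials are placeholders.
create extension mysql_fdw;

-- Point a foreign server at the MySQL instance.
create server mysql_svr
  foreign data wrapper mysql_fdw
  options (host '127.0.0.1', port '3306');

-- Map your local role to a MySQL account.
create user mapping for current_user
  server mysql_svr
  options (username 'app', password 'secret');

-- Declare a foreign table mirroring a MySQL table.
create foreign table foreign_table_whatever (
  id   integer,
  name text
)
server mysql_svr
options (dbname 'legacy_db', table_name 'table_whatever');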
Now you populate the Postgres tables, which have an analogous schema, with:
insert into pg_table_whatever select * from foreign_table_whatever;
These queries execute quite reasonably, enabling a quick switch to a new application layer backed by your new database. In a later blog post I'll address some of the architecture considerations for doing a staged migration.
Unfortunately, when it comes to stored procedures and triggers, be prepared to rewrite them in PL/pgSQL.
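For a flavor of what that looks like, a MySQL "before update" trigger that stamps a modification time could be rewritten roughly like this. The table and column names here are hypothetical.

-- Hypothetical PL/pgSQL replacement for a MySQL before-update trigger
-- that keeps an updated_at column current.
create function touch_updated_at() returns trigger as $$
begin
  new.updated_at := now();
  return new;
end;
$$ language plpgsql;

create trigger touch_updated_at
  before update on pg_table_whatever
  for each row execute procedure touch_updated_at();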
Also, it's worth noting that you can create foreign tables based on queries against the remote MySQL server, conceptually a "foreign view" rather than a "foreign table." This can come in handy if your Postgres schema is substantially different from the MySQL schema and you want to leverage the MySQL server's query optimizer. Again, see the mysql foreign data wrapper's documentation.
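In older mysql_fdw releases this was exposed as a query option on the foreign table; newer releases may expect dbname/table_name instead, so treat the following as a sketch and check your version's documentation. The table and query here are made up for illustration.

-- Sketch of a "foreign view": the query runs on the MySQL side,
-- and the local foreign table exposes its result shape.
create foreign table order_totals (
  customer_id integer,
  total       numeric
)
server mysql_svr
options (query 'select customer_id, sum(amount) from orders group by customer_id');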
One persistent problem when maintaining ETL systems is propagating schema changes. To automatically generate foreign table definitions you can use the Foreign Table Discovery Tool. It queries the MySQL information schema through a foreign table, which gives you a scriptable way to access a MySQL schema, as is, directly from PostgreSQL.
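The underlying idea is simple enough to sketch: expose MySQL's information_schema.columns as a foreign table, then generate create foreign table statements from it. The column list and the type mapping below are deliberately naive assumptions (everything becomes text); the actual tool does considerably more.

-- Expose MySQL's column catalog as a foreign table (option names
-- depend on your mysql_fdw version).
create foreign table mysql_columns (
  table_schema     text,
  table_name       text,
  column_name      text,
  data_type        text,
  ordinal_position integer
)
server mysql_svr
options (dbname 'information_schema', table_name 'columns');

-- Naive DDL generation: one text column per MySQL column.
-- Mapping data_type to proper Postgres types is where the real work is.
select format('create foreign table %I (%s) server mysql_svr options (dbname %L, table_name %L);',
              table_name,
              string_agg(format('%I text', column_name), ', ' order by ordinal_position),
              table_schema,
              table_name)
from mysql_columns
where table_schema = 'legacy_db'
group by table_schema, table_name;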