I have multiple thousands of CSVs that I need to dump into Postgres, one table per file. The problem is that these tables are not identical in structure, so I'm looking for a way to create each table on the fly from the structure of its CSV and then dump that CSV into the newly created table. If I were to do it manually, it would involve two steps:

1. Create the table based on the CSV's structure.
2. Dump the CSV data into the created table.
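For a single file, those two steps would look roughly like this (the table name, columns, and connection details are made-up examples; in the manual version I'd be writing out the CREATE TABLE by hand for every file after looking at its header):

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")  # placeholder connection
cur = conn.cursor()

# Step 1: create the table to match this particular CSV's structure.
cur.execute("""
    CREATE TABLE sales_2020 (
        order_id   integer,
        order_date date,
        amount     numeric
    )
""")

# Step 2: dump the CSV into the newly created table.
with open("sales_2020.csv") as f:
    cur.copy_expert(
        "COPY sales_2020 FROM STDIN WITH (FORMAT csv, HEADER true)", f
    )

conn.commit()
```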
But since I have thousands of these CSVs, doing this manually would be extremely inefficient. I'm looking for a way to dynamically create a Postgres table based on a CSV's structure, dump the data into that table, and automate the entire process across all the files.
Most of the research I've done points to Postgres commands for dumping data into a single existing table, but those solutions won't work here because of the sheer number of tables involved. What I've found so far:
- Efficient way to import a lot of csv files into PostgreSQL db - this points to a similar problem, but the tables there are all identical in structure, unlike mine.
- pandas.read_csv and pandas.DataFrame.to_sql - this seems like the closest fit, since to_sql can create the table from the DataFrame before inserting the rows.
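Something along these lines is what I have in mind. It's just a rough, untested sketch: the connection string and directory are placeholders, and the table name is naively derived from the file name.

```python
import glob
import os

import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string and directory -- adjust to the real setup.
engine = create_engine("postgresql+psycopg2://me:secret@localhost/mydb")
csv_dir = "/data/csvs"

for path in glob.glob(os.path.join(csv_dir, "*.csv")):
    # Naive table name taken from the file name; probably needs sanitizing.
    table_name = os.path.splitext(os.path.basename(path))[0].lower()

    df = pd.read_csv(path)

    # to_sql creates the table from the DataFrame's inferred dtypes,
    # then inserts the rows; if_exists="fail" stops on name collisions.
    df.to_sql(table_name, engine, if_exists="fail", index=False)
```

Would something like this hold up for thousands of files, or is there a more efficient way to create the tables and bulk-load the data?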