
I have a SQL database created by a 3rd-party program, and I am importing some data from an Excel table into the SQL DB with Python. Here are previews of the database and the Excel table:

[screenshot: database and Excel table previews]

As you can see, the SQL and Excel column names match exactly. Here is the code I use to import:

import pandas as pd
import sqlite3

#Paths
excel_path="C:/users/user/desktop/ACC_Import.xlsx"
sql_db_path="c:/users/P6_BD_DataBase_001"

#Defs
df=pd.read_excel(excel_path, dtype={"ACCT_SHORT_NAME":object})  # dtype matters: pandas would otherwise cast this column to int, which we don't want
conn=sqlite3.connect(sql_db_path)
cur=conn.cursor()

def insert_excel_to_sql(df):
    sqlite_insert_with_param = '''
                INSERT INTO ACCOUNT (ACCT_ID,PARENT_ACCT_ID,ACCT_SEQ_NUM,ACCT_NAME,
                ACCT_SHORT_NAME,ACCT_DESCR,CREATE_DATE,CREATE_USER,
                UPDATE_DATE,UPDATE_USER,DELETE_SESSION_ID,DELETE_DATE)
                VALUES (?,?,?,?,?,?,?,?,?,?,?,?);
                '''
    for row in df.itertuples():
        data_tuple = (row.ACCT_ID,
                      row.PARENT_ACCT_ID,
                      row.ACCT_SEQ_NUM,
                      row.ACCT_NAME,
                      row.ACCT_SHORT_NAME,
                      row.ACCT_DESCR,
                      row.CREATE_DATE,
                      row.CREATE_USER,
                      row.UPDATE_DATE,
                      row.UPDATE_USER,
                      row.DELETE_SESSION_ID,
                      row.DELETE_DATE)
        cur.execute(sqlite_insert_with_param, data_tuple)
    conn.commit()

I still have to type all the column names one by one, even though I am sure they are exactly the same in both places.

Is there any other way to import an Excel table into SQL (when the SQL and Excel column names are exactly the same) without typing all the column names one by one?

2 Answers


From the SQLite INSERT documentation:

INSERT INTO table VALUES(...);

The first form (with the "VALUES" keyword) creates one or more new rows in an existing table. If the column-name list after table-name is omitted then the number of values inserted into each row must be the same as the number of columns in the table. In this case the result of evaluating the left-most expression from each term of the VALUES list is inserted into the left-most column of each new row, and so forth for each subsequent expression.

Therefore it is not necessary to type all the column names in the INSERT statement.
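For example, a minimal sketch (not from the original answer; it reuses the table name and file paths from the question and assumes the Excel columns are in the same order as the ACCOUNT table columns):

import pandas as pd
import sqlite3

df = pd.read_excel("C:/users/user/desktop/ACC_Import.xlsx", dtype=object)
conn = sqlite3.connect("c:/users/P6_BD_DataBase_001")

placeholders = ",".join("?" * len(df.columns))            # one "?" per column
conn.executemany(f"INSERT INTO ACCOUNT VALUES ({placeholders});",
                 df.itertuples(index=False, name=None))   # each row as a plain tuple
conn.commit()
conn.close()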


1 Comment

Yes, it's not necessary to type all the column names in the INSERT, I just figured that out, thanks. But how can I send every df row as a tuple? With tuple(df.iloc[1].to_list())?

First, @DinoCoderSaurus is right: it is not necessary to type the column names in the INSERT statement. As for the df column names, this is the approach I used:

df=pd.read_excel("Acc_Import.xlsx",dtype=object)

This reads the Excel table with every column as dtype object.

for index in range(len(df)):
    row_tuple = tuple(df.iloc[index].to_list())          # one row as a plain tuple
    cur.execute('''INSERT INTO ACCOUNT VALUES (?,?,?,?,?,?,?,?,?,?,?,?);''', row_tuple)
conn.commit()  # persist the inserted rows

With the df read as dtype=object, the row_tuple items come back as plain Python types (int, str, float), so I was able to import the Excel data into the SQL database. Image below:

[screenshot: Excel rows imported into the SQL database]

If I don't read the df with dtype=object, then with

row_tuple=tuple(df.iloc[index].to_list())

the row_tuple items' types come back as numpy.int64 / numpy.float64, which causes a datatype mismatch error when importing the Excel table into the SQL DB. Image below:

[screenshot: datatype mismatch error]
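(Not part of the original answer, just an illustrative sketch: if you don't want to reread the file with dtype=object, another option is to convert the numpy scalars to plain Python values before binding, reusing the df, cur and conn from above.)

for index in range(len(df)):
    # .item() turns numpy.int64 / numpy.float64 into plain int / float;
    # anything without .item() (str, None, ...) passes through unchanged
    row_tuple = tuple(v.item() if hasattr(v, "item") else v for v in df.iloc[index])
    cur.execute('''INSERT INTO ACCOUNT VALUES (?,?,?,?,?,?,?,?,?,?,?,?);''', row_tuple)
conn.commit()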

