I'm designing a BigQuery job in Python that updates and inserts into several tables. I can think of two ways to achieve that:
1. Execute a query job and save the result into a temporary table with an update/insert indicator column, then process those rows afterwards. But it's not clear to me how to perform the update step with the Python client libraries (see the first sketch below).
2. Load the whole dataset into a new partitioned table and skip the updates/inserts entirely. It takes more space than I'd like, but the partitions expire in a few days anyway (see the second sketch below).
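For option 1, the part I'm unsure about is the update step. The closest I've found is issuing DML (UPDATE or MERGE) as an ordinary query job through google-cloud-bigquery. A minimal sketch of what I had in mind, where the table names, the `id`/`value` columns, and the `action` indicator column are placeholders for my real schema:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Apply the temp table's rows to the target: update the matched rows,
# insert the rest. Table names and columns are placeholders.
merge_sql = """
MERGE `my_dataset.target` AS t
USING `my_dataset.temp_changes` AS s
ON t.id = s.id
WHEN MATCHED AND s.action = 'update' THEN
  UPDATE SET t.value = s.value
WHEN NOT MATCHED AND s.action = 'insert' THEN
  INSERT (id, value) VALUES (s.id, s.value)
"""

job = client.query(merge_sql)  # DML runs as a regular query job
job.result()                   # block until the statement finishes
print(f"{job.num_dml_affected_rows} rows affected")
```

If MERGE is the right tool here, the indicator column may even be redundant, since MERGE already distinguishes matched from unmatched rows itself.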
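For option 2, this is roughly the load I'm planning. A sketch assuming a newline-delimited JSON export in GCS and a daily partition column; the bucket path, table name, partition field, and 3-day expiry are all placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

table_id = "my_project.my_dataset.snapshot"  # placeholder table name

job_config = bigquery.LoadJobConfig(
    # Replace the table contents on each run.
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    time_partitioning=bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        field="load_date",                        # placeholder partition column
        expiration_ms=3 * 24 * 60 * 60 * 1000,    # partitions expire after 3 days
    ),
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/export/*.json",  # placeholder source
    table_id,
    job_config=job_config,
)
load_job.result()  # wait for the load to complete
```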
Am I missing something? Is there another way to achieve this?