That depends on how many columns your pivot table uses. Fewer columns = less data read = a lower price.
If the implementation is smart enough it could use BigQuery partitions for further cost savings.
Is the data size calculated based on the raw data the query operates on (for example, all transactions with an amount column), or on the query result size (for example, SUM(amount), a single row)?
BigQuery bills based on the amount of data read. BigQuery is a column-store database, so a read will always touch every row of a table, but only the columns actually used. So e.g. "SELECT * LIMIT 1" will read the entire table, while "SELECT SUM(transaction.total)" will only read 1 column out of potentially hundreds. The latter query will be billed identically to a filtered query like "SELECT SUM(transaction.total) WHERE transaction.total > 10". Filtering on a different column will be billed more, because the query needs to read that second column as well, regardless of how many rows (if any!) contribute to the result set.
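A minimal sketch of that billing model, with made-up column sizes for an illustrative transactions table (the names and byte counts are assumptions, not real BigQuery internals):

```python
# Sketch of BigQuery's on-demand billing model: bytes billed depend only
# on which columns the query references, never on how many rows match a
# filter or end up in the result. Column sizes below are invented.
COLUMN_BYTES = {
    "total": 8_000_000_000,         # FLOAT64 column, 1B rows
    "customer_id": 8_000_000_000,   # INT64 column
    "description": 40_000_000_000,  # wide STRING column
}

def bytes_read(referenced_columns):
    """Bytes scanned = sum of the sizes of every column the query touches."""
    return sum(COLUMN_BYTES[c] for c in referenced_columns)

# SELECT SUM(total)            -> reads only the 'total' column
# ... WHERE total > 10         -> same column, same bill
# ... WHERE customer_id = 7    -> must also scan 'customer_id', costs more
# SELECT *                     -> scans every column
sum_only      = bytes_read({"total"})
filter_other  = bytes_read({"total", "customer_id"})
select_star   = bytes_read(set(COLUMN_BYTES))
print(sum_only, filter_other, select_star)
```

Note how the wide string column dominates: a query that never touches "description" avoids most of the table's bytes entirely.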
Idiomatic BigQuery makes use of partitioning, so that a large dataset spans multiple tables and a query only reads the tables of interest. (E.g. the Google Analytics integration partitions by date, so reporting on 1 month of data will only read 30 tables out of the 1,500 you might have.)
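The pruning logic can be sketched like this, assuming GA-style daily tables named ga_sessions_YYYYMMDD (the naming follows the real GA export convention, but the function itself is illustrative):

```python
# Sketch of date-partition pruning: a date-bounded query only touches
# the daily tables inside the requested range, not the whole dataset.
from datetime import date, timedelta

def tables_scanned(start: date, end: date) -> list[str]:
    """Return the daily table names a query over [start, end] would read."""
    days = (end - start).days + 1
    return [f"ga_sessions_{start + timedelta(d):%Y%m%d}" for d in range(days)]

# A one-month report touches ~30 tables, however many years of history exist:
april = tables_scanned(date(2024, 4, 1), date(2024, 4, 30))
print(len(april))  # 30
```

In real BigQuery SQL you'd get the same effect with a wildcard table plus a _TABLE_SUFFIX filter, or with a partitioned table and a filter on the partitioning column.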
It's worth noting that depending on how much data you capture in GA, each date table can be anywhere from a few GB to tens of GB, and querying across those tables can get pretty pricey.