Internal document: Google trained PaLM 2 on 3.6T tokens, and the model has 340B parameters, compared with 780B training tokens and 540B parameters for the original PaLM in 2022 (Jennifer Elias/CNBC)