
Climate-Driven Doubling of U.S. Maize Loss Probability

Interactive Simulation with Neural Network Monte Carlo

Introduction: Corn is in trouble.

Crop insurance, corn, and climate change: This website serves as an interactive exploration of what the future might look like in the US Corn Belt, complementing our scientific paper (preprint) as interactive supplemental material. In short, our work anticipates that the probability of crop losses will rise such that, for the kind of insurance we examine, climate change will lead to a doubling in claims probability around 2050 relative to a future without further warming.
Table of contents
Here's an outline of what is available on this website:
  • Introduction: This is the tab you are currently on which introduces this problem at a high level.
  • Rates: Exploration of how rates are set within crop insurance and highlighting possible policy levers.
  • ML & Hyperparameters: Discussion of machine learning hyperparameters considered with additional information on model performance.
  • Predicted Distributions: Examination of Monte Carlo outcomes with predicted distributions under different conditions.
  • Claims & Policy: Examination of how policy choices could change claims rates and response to adverse conditions.
  • Neighborhoods: Exploration of individual neighborhood-level outcomes with options for geographic visualization.
  • About: Information about these tools.
  • Settings: Settings including accessibility options.
  • Glossary: Terms used within this website.
  • Repository: Source code for our modeling and this website.
  • Terms and Privacy: Terms of use and privacy policy.
Climate change threatens the future of corn. Studies expect planet-wide maize yields to drop by up to 24% by the end of this century (Jägermeyr et al. 2021). Climate change alters growing conditions (Williams et al 2024), including a higher frequency and severity of stressful weather (Dai 2013) which can impact yields (Sinsawat et al 2004; Marouf et al 2013; Lobell et al 2020). In addition to threatening farmer revenue (Sajid et al 2023), these changes can also challenge the institutions tasked with protecting and helping those growers (Hanrahan 2024).
How might we use AI to understand this future and prepare for it? These explorables (Victor 2011) detail key aspects of our analysis of future maize yields. Our work specifically considers the future risk for one of these important institutions: the US Federal Crop Insurance Program. Climate change increasingly threatens crucial resources like the FCIP which provide important stability (Mahul and Stutley 2010) to a food system progressively impacted by climate change (Ray et al 2015). However, despite these challenges, we find important information to inform climate adaptation within our AI results.
You only need a web browser. This website houses tools which complement an academic paper (preprint), allowing for further exploration beyond the manuscript. You don't need to install any special software to use it.
Technical details about the interactives
We will work on interactive labs embedded within this website. All you need is your web browser.
You do not need to know the technical specifics of how these interactives work in order to use this website. However, for those interested, these are small Python experiences written in Sketchingpy that run via WebAssembly using PyScript. You don't need to install any special software to make this happen. See our open source repository and our open source credits.
Let's get started. Each section of this website dives into one aspect of analysis through a "mini-lab" in which you can operate analysis tools as described in our scientific paper (preprint).
Citations for intro
The citations for this introduction section are as follows:
  • A. Dai, “Increasing drought under global warming in observations and models,” Nature Clim Change, vol. 3, no. 1, pp. 52–58, Jan. 2013, doi: 10.1038/nclimate1633.
  • R. Hanrahan, “Crop Insurance Costs Projected to Jump 29%.” University of Illinois, Feb. 15, 2024. [Online]. Available: https://farmpolicynews.illinois.edu/2024/02/crop-insurance-costs-projected-to-jump-29/
  • J. Jägermeyr et al., “Climate impacts on global agriculture emerge earlier in new generation of climate and crop models,” Nat Food, vol. 2, no. 11, pp. 873–885, Nov. 2021, doi: 10.1038/s43016-021-00400-y.
  • D. B. Lobell, J. M. Deines, and S. D. Tommaso, “Changes in the drought sensitivity of US maize yields,” Nat Food, vol. 1, no. 11, pp. 729–735, Oct. 2020, doi: 10.1038/s43016-020-00165-w.
  • K. Marouf, M. Naghavi, A. Pour-Aboughadareh, and H. Naseri rad, “Effects of drought stress on yield and yield components in maize cultivars (Zea mays L),” International Journal of Agronomy and Plant Production, vol. 4, pp. 809–812, Jan. 2013.
  • O. Mahul and C. Stutley, “Government Support to Agricultural Insurance: Challenges and Options for Developing Countries.” The World Bank Open Knowledge Repository, 2010. [Online]. Available: https://openknowledge.worldbank.org/entities/publication/8a605230-3df5-5a4e-8996-5a6de07747
  • NCEI, “Corn Belt.” NOAA, 2024. [Online]. Available: https://www.ncei.noaa.gov/access/monitoring/reference-maps/corn-belt
  • D. K. Ray, J. S. Gerber, G. K. MacDonald, and P. C. West, “Climate variation explains a third of global crop yield variability,” Nat Commun, vol. 6, no. 1, p. 5989, Jan. 2015, doi: 10.1038/ncomms6989.
  • O. Sajid, A. Ortiz-Bobea, J. Ifft, and V. Gauthier, “Extreme Heat and Kansas Farm Income.” Farmdoc Daily of Department of Agricultural and Consumer Economics, University of Illinois at Urbana-Champaign, Jul. 26, 2023. [Online]. Available: https://farmdocdaily.illinois.edu/2023/07/extreme-heat-and-kansas-farm-income.html
  • V. Sinsawat, J. Leipner, P. Stamp, and Y. Fracheboud, “Effect of heat stress on the photosynthetic apparatus in maize (Zea mays L.) grown at control or high temperature,” Environmental and Experimental Botany, vol. 52, no. 2, pp. 123–129, Oct. 2004, doi: 10.1016/j.envexpbot.2004.01.010.
  • B. Victor, “Explorable Explanations.” Bret Victor, Mar. 10, 2011. [Online]. Available: http://worrydream.com/ExplorableExplanations/
  • E. Williams, C. Funk, P. Peterson, and C. Tuholske, “High resolution climate change observations and projections for the evaluation of heat-related extremes,” Sci Data, vol. 11, no. 1, p. 261, Mar. 2024, doi: 10.1038/s41597-024-03074-w.
Text on technical setup and preparation also mentions the following: some definition citations are in the glossary.

Rates Explorer

This first interactive tool explores the structure of the popular Multi-Peril Crop Insurance. We specifically walk through the rates setting aspects of MPCI's Yield Protection plan.
Details about variables used for rate setting
These are not technical definitions but, conceptually, here are simplified meanings of each variable in rate setting and the hypothetical rate interactive.
  • Insured Unit APH: APH for the set of fields in the risk unit for which a policy is being created.
  • County average yield: The average yield (APH equivalent) for all fields in the county.
  • Projected price: Expected price at which the crop will sell often based on daily futures market data.
  • County premium rate: Based on the crop and county, how much it costs to insure one future dollar of expected crop. A value of 4% means 4 cents for each dollar insured.
  • Subsidy rate: What percent of the overall policy cost is covered by subsidy. A value of 55% means that the grower pays for 45% of the overall cost of the policy.
  • Coverage amount: Up to what percent of APH will be covered. With coverage set at 75%, a yield that comes in at only 70% of APH means the policy will pay out 5% (75% minus 70%) of APH.
To expand upon or clarify these simplified or abbreviated definitions, see Plastina and Edwards (2020).
A mixture of many variables determines the price a producer pays for insurance (Plastina and Edwards 2020). First, the grower chooses how much of their expected yields they want to insure (horizontal axis in the interactive). These expectations are set through reports of a grower's yields, where those histories are called Actual Production Histories. Still, in determining the specific cost of a policy, the plan combines multiple variables together: certain county-level data, the APH for a risk unit, and a series of adjustments. These factors set a price and a subsidy from the government (vertical axis in the interactive). See details about variables used for rate setting.
This section contains interactive components. It will take just a moment to load the lab.
Load Interactives
Loading...
Rates: Visualization showing how subsidy changes according to different parameters either in an average-based APH or the proposed standard deviation-based APH. Uses Plastina and Edwards (2020) as a starting point.
The formulas which can conceptually simulate rate setting:
  • overall price = insured unit aph * projected price * county premium rate * coverage amount
  • subsidy = subsidy rate * overall price
  • cost to grower = (1 - subsidy rate) * overall price
This is an alternative to a visualization which evaluates these equations for some examples. One such set of examples:
  • overall price = 200 bushel / acre * $4 / bushel * 4% * 75% = $24 / acre
  • subsidy = 55% * $24 / acre = $13.20 / acre
  • cost to grower = (1 - 55%) * $24 / acre = $10.80 / acre
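The formulas and worked example above can be captured in a small Python sketch. The function and variable names are ours, for illustration only; see Plastina and Edwards (2020) for the authoritative definitions.

```python
def rate_quote(insured_unit_aph, projected_price, county_premium_rate,
               coverage_amount, subsidy_rate):
    """Conceptually simulate Yield Protection rate setting.

    insured_unit_aph: bushels per acre
    projected_price: dollars per bushel
    county_premium_rate, coverage_amount, subsidy_rate: fractions (0 to 1)
    Returns (overall_price, subsidy, cost_to_grower) in dollars per acre.
    """
    overall_price = (insured_unit_aph * projected_price
                     * county_premium_rate * coverage_amount)
    subsidy = subsidy_rate * overall_price
    cost_to_grower = (1 - subsidy_rate) * overall_price
    return overall_price, subsidy, cost_to_grower

# The worked example from the text:
overall, subsidy, cost = rate_quote(200, 4.00, 0.04, 0.75, 0.55)
# overall ≈ $24.00 / acre, subsidy ≈ $13.20 / acre, cost ≈ $10.80 / acre
```

Changing any one input scales the quote linearly, which is why the interactive can show subsidy rising directly with APH.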
The subsidy increases as APH increases. Uses Plastina and Edwards (2020) as a starting point.
The rates visualization has the following controls:
  • Esc: Exit the visualization
  • i: Change insured unit APH
  • c: Change county average yield
  • p: Change projected price
  • r: Change county premium rate
  • s: Change subsidy rate
  • a: Change coverage amount
  • t: Change APH type
  • o: Change output variable
The visualization will need focus in order to receive keyboard commands.
Note that the horizontal axis in the interactive representing desired coverage level is currently defined as a percent of APH. Other interactives explore what happens if this includes an understanding of yield volatility. Changes to APH-based coverage levels could happen through the 508(h) process or by changing the law (CFR and USC). For example:
the level of coverage ... may be purchased at any level not to exceed 85 percent of the [expected] individual yield ... the yield for a crop shall be based on the actual production history for the crop
Text specifics can dramatically change the outlook for both crop insurance and farmers. For example, later explorables on this site consider what might happen if that definition were to change to this:
the level of coverage ... may be purchased at any level not to exceed 1.5 standard deviations below the [expected] individual yield ... the yield for a crop shall be based on the actual production history for the crop
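To see why the wording matters, consider a sketch using Python's standard library. It assumes normally distributed yields (our scientific paper also considers other distribution shapes), and the APH and volatility numbers are hypothetical:

```python
from statistics import NormalDist

# Hypothetical neighborhood: APH of 200 bu/acre, yield std of 20 bu/acre.
aph, std = 200.0, 20.0
yields = NormalDist(mu=aph, sigma=std)

# Current text: coverage up to 85 percent of the expected (APH-based) yield.
p_claim_aph = yields.cdf(0.85 * aph)        # P(yield < 170)

# Alternative text: 1.5 standard deviations below the expected yield.
p_claim_std = yields.cdf(aph - 1.5 * std)   # also P(yield < 170) here

# With std = 20 the two thresholds coincide (0.15 * 200 == 1.5 * 20). But if
# volatility rises to std = 30 while the average stays put, the APH-based
# threshold is crossed more often, while the std-based threshold keeps the
# claim probability fixed near NormalDist().cdf(-1.5) ≈ 6.7%.
volatile = NormalDist(mu=aph, sigma=30.0)
p_aph_volatile = volatile.cdf(0.85 * aph)        # rises above p_claim_aph
p_std_volatile = volatile.cdf(aph - 1.5 * 30.0)  # unchanged claim rate
```

This is the core of the distinction explored later: an average-based threshold is blind to volatility, while a standard deviation-based threshold adapts to it.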
Citations for this section
The citations for this section are as follows:
  • CFR and USC, Crop insurance. [Online]. Available: https://uscode.house.gov/view.xhtml?req=(title:7%20section:1508%20edition:prelim)
  • R. Chite et al., “Agriculture: A Glossary of Terms, Programs, and Laws, 2005 Edition.” Congressional Research Service, Jun. 16, 2005. [Online]. Available: https://crsreports.congress.gov/product/pdf/RL/97-905/6
  • FCIC, “Common Crop Insurance Policy 21.1-BR.” United States Department of Agriculture, Nov. 2020. [Online]. Available: https://www.rma.usda.gov/-/media/RMA/Publications/Risk-Management-Publications/rma_glossary.ashx?la=en
  • M. Hargrave, “Standard Deviation Formula and Uses vs. Variance.” Investopedia, May 23, 2024. [Online]. Available: https://www.investopedia.com/terms/s/standarddeviation.asp
  • NCEI, “Corn Belt.” NOAA, 2024. [Online]. Available: https://www.ncei.noaa.gov/access/monitoring/reference-maps/corn-belt
  • J. Osiensky, “Understanding the Price Side of Revenue Crop Insurance.” Washington State University, Aug. 19, 2021. [Online]. Available: https://smallgrains.wsu.edu/understanding-the-price-side-of-revenue-crop-insurance/
  • A. Plastina and W. Edwards, “Yield Protection Crop Insurance.” Iowa State University Extension and Outreach, Dec. 2020. [Online]. Available: https://www.extension.iastate.edu/agdm/crops/html/a1-52.html
  • A. Plastina and S. Johnson, “Supplemental Coverage Option (SCO) and Enhanced Coverage Option (ECO).” Iowa State University Extension and Outreach, Oct. 2022. [Online]. Available: https://www.extension.iastate.edu/agdm/crops/html/a1-44.html
  • G. Schnitkey, N. Paulson, and C. Zulauf, “Privately Developed Crop Insurance Products and the Next Farm Bill.” University of Illinois, Mar. 14, 2023. [Online]. Available: https://farmdocdaily.illinois.edu/2023/03/privately-developed-crop-insurance-products-and-the-next-farm-bill.html
  • F. Tsiboe and D. Turner, “Crop Insurance at a Glance.” Economic Research Service, USDA, May 03, 2023. [Online]. Available: https://www.ers.usda.gov/topics/farm-practices-management/risk-management/crop-insurance-at-a-glance/
Some definition citations are in the glossary.

Hyper-Parameters Explorer

Our neural network learns from past growing conditions and their associated outcomes (Williams et al 2024; Lobell et al 2015) in order to predict future yields given climate projections (Williams et al 2024).
More details about inputs, outputs, and climate variables
The inputs to the neural network include climate variables, year, and baseline yield (mean and std of all historic yields for the cell). The output is mean and std of changes to that future yield for the grid cell. Note that our scientific paper (preprint) considers other non-normal distribution shapes. Anyway, yield data come from SCYM (Lobell et al 2015). Historic growing conditions and future climate projections come from CHC-CMIP6 (Williams et al 2024). The climate variables are as follows (Williams et al 2024) where all are daily:
  • rhn: overall relative humidity
  • rhx: relative humidity peak
  • tmax: maximum temperature
  • tmin: minimum temperature
  • chirps: precipitation
  • svp: saturation vapor pressure
  • vpd: vapor pressure deficit
  • wbgt: wet bulb globe temperature
These daily climate values are summarized to min, max, mean, and std of daily values per month per grid cell before going to the neural network. We use the dataset variable names for consistency.
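This summarization step can be sketched with pandas. The column names and values below are illustrative, not the pipeline's actual schema:

```python
import pandas as pd

# Hypothetical daily records for one variable; real inputs use the
# CHC-CMIP6 variable names (rhn, rhx, tmax, tmin, chirps, svp, vpd, wbgt).
daily = pd.DataFrame({
    "geohash": ["9zvx"] * 4,          # placeholder 4 character grid cell
    "month": [6, 6, 7, 7],
    "tmax": [30.0, 34.0, 36.0, 38.0],
})

# Summarize daily values to min, max, mean, and std per month per grid cell
# before they go to the neural network.
monthly = daily.groupby(["geohash", "month"])["tmax"].agg(
    ["min", "max", "mean", "std"]
).reset_index()
```

Each climate variable gets the same four summary statistics, so a year of daily values per cell collapses into a fixed-size monthly feature set.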
First, we divide up the US Corn Belt into a geographic grid, breaking up the problem so it becomes tractable for modeling. Next, our network goes year by year, forecasting the yields for each cell ("neighborhood") in that grid. Within these small areas, the network describes the range of expected changes to yield as a mean and standard deviation per cell. This is further discussed in our scientific paper (preprint).
More details about the grid
We divide the US Corn Belt into small groups of fields to create a grid where each cell is about 28 by 20 kilometers in size (Haugen 2020). This happens through four character geohashing, an algorithm that helps create these grids (Niemeyer 2008). Every piece of land ends up in exactly one grid cell but there may be more land dedicated to growing maize in some areas versus others. Therefore, our model gives more weight to a cell which has more corn compared to a cell with less.
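For the curious, here is a minimal pure-Python sketch of the geohash encoding algorithm (Niemeyer 2008). In practice a library typically handles this; the reference coordinate below is the classic worked example, not one of our grid cells:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=4):
    """Encode a latitude/longitude pair into a geohash string.

    The algorithm repeatedly bisects the longitude and latitude ranges,
    alternating between them (longitude first), and records one bit per
    bisection. Every 5 bits become one base-32 character.
    """
    lat_range = [-90.0, 90.0]
    lon_range = [-180.0, 180.0]
    bits = []
    even = True  # start with a longitude bit
    while len(bits) < precision * 5:
        rng = lon_range if even else lat_range
        val = lon if even else lat
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            bits.append(1)
            rng[0] = mid
        else:
            bits.append(0)
            rng[1] = mid
        even = not even
    chars = []
    for i in range(0, len(bits), 5):
        idx = 0
        for b in bits[i:i + 5]:
            idx = (idx << 1) | b
        chars.append(BASE32[idx])
    return "".join(chars)

# Classic reference point (57.64911°N, 10.40744°E):
geohash_encode(57.64911, 10.40744, precision=4)  # → "u4pr"
```

Every coordinate falls into exactly one 4 character cell, which is what lets us assign each piece of land to exactly one "neighborhood."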
This interactive lets us try out different settings for our neural network. These different configuration options are often called hyper-parameters. Our tool looks at different depths, L2s, and dropouts as well as removing individual variables from the model to see how it performs. See hyper-parameter details to learn more about what each setting does.
Details of hyper-parameters
Optimizing the model requires a lot of experimentation. In general, we can follow some basic rules:
  • A large distance between training set and validation set MAE suggests the model pays too much attention to noise in the data, or is "overfit"; in this case, one may need to increase L2 / dropout or decrease depth (Koehrsen 2018).
  • Error too far to the upper right suggests the model is "underfit"; one may have to increase depth or decrease dropout / L2 (Koehrsen 2018).
All that said, sometimes changing the parameters does not have the intended effect. Instead, data scientists building these models simply have to try a lot of different combinations of our hyper-parameters:
  • Depth: The number of layers in the network (Brownlee 2020). Deeper networks can learn more sophisticated lessons from the data but can require more data to train (Brownlee 2019) otherwise it may memorize examples instead of learning broader trends (Brownlee 2020).
  • L2: A penalty on the model for very strong pathways between neurons (Oppermann 2020). Conceptually, this avoids focusing too much on small specifics in the data unless there is strong evidence for that connection. Turning this "knob" up sets a higher bar for the evidence required for those strong connections.
  • Dropout: Randomly turn off neurons while training (Srivastava et al 2014). Conceptually, this encourages finding multiple ways to use inputs in making predictions. This avoids relying on one small aspect of the data (though at the risk of disrupting learning). This "knob" turned up means neurons are disabled with higher frequency.
  • Blocks: Sometimes certain data can be distracting to the model or simply having a large number of variables can become overwhelming (Karanam 2021). This option lets us take out individual climate variables.
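To make the L2 and dropout "knobs" concrete, here is an illustrative NumPy sketch. The weights, strengths, and rates are arbitrary numbers, not values from our model:

```python
import numpy as np

rng = np.random.default_rng(42)
weights = rng.normal(size=(8, 4))   # toy layer weights
activations = rng.normal(size=8)    # toy layer activations

# L2: a penalty proportional to the squared weights is added to the loss,
# so large connections must earn their cost with strong evidence.
l2_strength = 0.01
l2_penalty = l2_strength * np.sum(weights ** 2)

# Dropout: during training, randomly zero activations with probability
# `rate`, encouraging redundant pathways. Survivors are rescaled
# ("inverted dropout") so expected values stay the same.
rate = 0.5
mask = rng.random(activations.shape) >= rate
dropped = np.where(mask, activations / (1 - rate), 0.0)
```

Turning either knob up (larger `l2_strength`, larger `rate`) makes the model more conservative; turning them down lets it fit finer detail, at the risk of overfitting.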
Clicking the "Try Model" button will test a new model where we split the historic data such that the network trains from one piece (training set) before we grade it on the part it has not seen before (validation set). We score performance with mean absolute error (Acharya 2021). For example, if the model predicts that a neighborhood's yield will drop by 10% and it actually drops by 15%, the error would be 5%. Our goal: go left on the horizontal axis (minimize error predicting mean) and to the bottom on the vertical axis (minimize error predicting std).
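The scoring metric described above, in miniature (a minimal sketch of MAE, not our evaluation code):

```python
def mean_absolute_error(predicted, actual):
    """Average of absolute differences between predictions and outcomes."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# The example from the text: predicted a 10% yield drop, actual drop was 15%,
# contributing 5 percentage points of error.
mae = mean_absolute_error([-10.0], [-15.0])  # → 5.0
```

The interactive reports this separately for the predicted mean and predicted standard deviation, which is why the visualization has two axes.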
This section contains interactive components. It will take just a moment to load the lab.
Load Interactives
Loading...
Hyper-parameters: Visualization that allows users to try different neural network configurations and see their validation set performance. How would you balance the error between predicted mean and predicted standard deviation?
Alternative data downloads:
  • Sweep Ag All: Information about candidates considered in our model sweep.
These are CSV files which can be opened in spreadsheet software. These are available under the CC-BY-NC License.
The hyper-parameters visualization has the following controls:
  • Esc: Exit the visualization
  • n: Change depth
  • l: Change L2
  • d: Change dropout
  • b: Change variable block
  • t: Try the currently selected configuration
  • s: Execute model sweep
The visualization will need focus in order to receive keyboard commands.
It can be helpful to see how the neural network responds to different configurations. However, we can also ask a computer to optimize these metrics through a sweep, where it tries many combinations of parameters to find the best one (Joseph 2018). Execute this by clicking / tapping the "Run Sweep" button. For more details, see our scientific paper (preprint). We use that optimized configuration in the rest of this explorable explanation.
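Conceptually, a sweep is a grid search over hyper-parameter combinations. Here is a toy sketch with made-up validation errors; the real sweep trains a model per combination rather than looking results up, and the 3-to-1 weighting of mean over std error matches the preference discussed in this explorable:

```python
from itertools import product

# Hypothetical validation errors, keyed by (depth, l2, dropout):
results = {
    (2, 0.0, 0.0): (7.1, 2.9),  # (MAE of mean prediction, MAE of std prediction)
    (4, 0.1, 0.0): (6.2, 2.4),
    (4, 0.1, 0.5): (6.0, 2.1),
    (6, 0.2, 0.5): (6.4, 2.0),
}

def score(mae_mean, mae_std, mean_weight=3.0, std_weight=1.0):
    # Weight mean-prediction error 3-to-1 over std-prediction error.
    return mean_weight * mae_mean + std_weight * mae_std

# Try every combination; untested configs score infinitely badly here.
best = min(
    product([2, 4, 6], [0.0, 0.1, 0.2], [0.0, 0.5]),
    key=lambda cfg: score(*results[cfg]) if cfg in results else float("inf"),
)
# best → (4, 0.1, 0.5)
```

The sweep automates the "try a lot of combinations" advice above, but the weighting in `score` is still a human judgment call.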
More about why the sweep selected this configuration
Even with a computer trying hundreds of models, we still have to make some choices:
  • Do we weigh the model's ability to predict the mean equally against its ability to predict the standard deviation?
  • After choosing a model configuration, we retrain with all available data. So, do we want to emphasize the model with the best metrics in the validation set, or one with slightly worse performance but less overfitting, which might do better with more examples to learn from?
  • Do we want a deeper model which might benefit from more data or a shallower model that seems to perform well in the smaller training dataset used in the sweep?
Sadly there is no one right answer but, in the selection we are using for the rest of this online lab, we prefer the ability to predict mean over std by a ratio in weights of 3 to 1. We also elect for a slightly deeper model in anticipation of retraining with all historic data.
The errors in the visualization reflect a smaller dataset than the one used to train the final model. After retraining with more information, the model sees an MAE of 6% for mean prediction and 2% for std prediction.
More about model performance and uncertainty
During the model sweep, we have to divide up our data into three different sets. In this process, we split the data by year. For example, all of the observations from 2011 end up in only one of the sets (Shah 2017):
  • The validation set allows us to compare different models together (what is used in the visualization).
  • The test set provides a prediction of future performance.
  • Only the training set teaches the model while we try to optimize parameters like L2, dropout, etc.
That said, after determining our preferred model configuration, we can have the model learn from the combined validation and training sets before we make a final estimate of the error we expect in the future using the test set. In this final trial, we see an MAE of 6% for mean prediction and 2% for std prediction. We can later use an understanding of this model uncertainty in Monte Carlo.
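The year-based split described above can be sketched as follows (the year ranges are illustrative, not our actual split):

```python
# Split observations into train / validation / test by year, so that all
# records from a given year land in exactly one set.
years = list(range(2000, 2017))
test_years = set(years[-3:])          # held out for the final error estimate
validation_years = set(years[-6:-3])  # used to compare candidate models
train_years = set(years) - test_years - validation_years

def assign(observation_year):
    """Return which set an observation belongs to, based on its year."""
    if observation_year in test_years:
        return "test"
    if observation_year in validation_years:
        return "validation"
    return "train"
```

Splitting by year rather than at random keeps whole growing seasons together, so the model is always graded on years it has never seen.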
Anyway, with this neural network built, we run Monte Carlo simulations to see what the future of MPCI and YP might look like while incorporating these measures of neural network uncertainty.
Citations for this section
The citations for this section are as follows:
  • S. Acharya, “What are RMSE and MAE?” Towards Data Science, May 13, 2021. [Online]. Available: https://towardsdatascience.com/what-are-rmse-and-mae-e405ce230383
  • P. Baheti, “The Essential Guide to Neural Network Architectures.” V7Labs, Jul. 08, 2021. [Online]. Available: https://www.v7labs.com/blog/neural-network-architectures-guide
  • J. Brownlee, “How Much Training Data is Required for Machine Learning?” Guiding Tech Media, May 23, 2019. [Online]. Available: https://machinelearningmastery.com/much-training-data-required-machine-learning/
  • J. Brownlee, “How to Control Neural Network Model Capacity With Nodes and Layers.” Guiding Tech Media, Aug. 25, 2020. [Online]. Available: https://machinelearningmastery.com/how-to-control-neural-network-model-capacity-with-nodes-and-layers/
  • B. Haugen, “Geohash Size Variation by Latitude.” Mar. 14, 2020. [Online]. Available: https://bhaugen.com/blog/geohash-sizes/
  • R. Joseph, “Grid Search for Model Tuning.” Towards Data Science, Dec. 29, 2018. [Online]. Available: https://towardsdatascience.com/grid-search-for-model-tuning-3319b259367e
  • S. Karanam, “Curse of Dimensionality — A ‘Curse’ to Machine Learning.” Towards Data Science, Aug. 10, 2021. [Online]. Available: https://towardsdatascience.com/curse-of-dimensionality-a-curse-to-machine-learning-c122ee33bfeb
  • W. Koehrsen, “Overfitting vs. Underfitting: A Complete Example.” Towards Data Science, Jan. 28, 2018. [Online]. Available: https://towardsdatascience.com/overfitting-vs-underfitting-a-complete-example-d05dd7e19765
  • D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” 2014. doi: 10.48550/ARXIV.1412.6980.
  • I. Loshchilov and F. Hutter, “Decoupled Weight Decay Regularization,” in International Conference on Learning Representations, 2017. [Online]. Available: https://api.semanticscholar.org/CorpusID:53592270
  • D. B. Lobell, D. Thau, C. Seifert, E. Engle, and B. Little, “A scalable satellite-based crop yield mapper,” Remote Sensing of Environment, vol. 164, pp. 324–333, Jul. 2015, doi: 10.1016/j.rse.2015.04.021.
  • A. Maas, A. Hannun, and A. Ng, “Rectifier Nonlinearities Improve Neural Network Acoustic Models,” in Proceedings of the 30th International Conference on Machine Learning, Atlanta, Georgia: JMLR, 2013.
  • NCEI, “Corn Belt.” NOAA, 2024. [Online]. Available: https://www.ncei.noaa.gov/access/monitoring/reference-maps/corn-belt
  • G. Niemeyer, “geohash.org is public!” Labix Blog, Feb. 26, 2008. [Online]. Available: https://web.archive.org/web/20080305102941/http://blog.labix.org/2008/02/26/geohashorg-is-public/
  • A. Oppermann, “Regularization in Deep Learning — L1, L2, and Dropout.” Towards Data Science, Feb. 19, 2020. [Online]. Available: https://towardsdatascience.com/regularization-in-deep-learning-l1-l2-and-dropout-377e75acc036
  • T. Shah, “About Train, Validation and Test Sets in Machine Learning.” Towards Data Science, Dec. 06, 2017. [Online]. Available: https://towardsdatascience.com/train-validation-and-test-sets-72cb40cba9e7
  • D. Shiffman, The nature of code: simulating natural systems with JavaScript. San Francisco: No Starch Press, 2024.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” Journal of Machine Learning Research, vol. 15, no. 56, pp. 1929–1958, 2014.
  • E. Williams, C. Funk, P. Peterson, and C. Tuholske, “High resolution climate change observations and projections for the evaluation of heat-related extremes,” Sci Data, vol. 11, no. 1, p. 261, Mar. 2024, doi: 10.1038/s41597-024-03074-w.
Some definition citations are in the glossary.
Data downloads for methods
The data downloads for this methods section ("next, a neural network") are available under the CC-BY-NC License.

Predicted Distributions Explorer

Having made our neural network, Monte Carlo next examines what crop insurance outcomes might look like in the future given what our model learned. This lets us calculate loss frequency and severity, revealing how often we expect claims in the future. While running these trials, we can also incorporate neural network uncertainty.
Details about the simulations
In our Monte Carlo, we sample the neural network's outputs. However, to account for uncertainty caused by our model, we also draw from the measurements of errors we've seen from our neural network (Yanai 2010). Finally, we also account for the size of risk units (RMA 2024) since policies cover collections of fields and not geohashes or individual fields. Altogether, this paints a picture of changes to yield relative to APH at the 2030 and 2050 timeframes. For more information, see our scientific paper (preprint).
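A heavily simplified sketch of this kind of Monte Carlo draw follows. The parameter names and the way uncertainty is folded in are illustrative only; see our scientific paper (preprint) for the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_claims(pred_mean, pred_std, model_err_mean, model_err_std,
                    fields_per_unit, coverage=0.75, n_trials=10_000):
    """Sketch of claim frequency for one neighborhood.

    pred_mean / pred_std: network-predicted distribution of yield change
    (as fractions of APH, e.g. -0.1 for a 10% drop).
    model_err_mean / model_err_std: rough model-uncertainty terms
    perturbing each trial's distribution.
    """
    claims = 0
    for _ in range(n_trials):
        # Perturb the predicted distribution by model uncertainty...
        mean = pred_mean + rng.normal(0, model_err_mean)
        std = abs(pred_std + rng.normal(0, model_err_std))
        # ...then average several field-level draws to mimic a risk unit.
        unit_change = rng.normal(mean, std, size=fields_per_unit).mean()
        if 1 + unit_change < coverage:  # yield fell below covered share of APH
            claims += 1
    return claims / n_trials
```

Repeating this per grid cell, weighted by how much maize each cell grows, builds up the predicted distributions shown in the interactive.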
This section contains interactive components. It will take just a moment to load the lab.
Load Interactives
Loading...
Distribution: Visualization that explores the details of the results in depth. Which type of loss on which time horizon are we most worried about?
Alternative data downloads:
  • Sim Hist: Frequency information about changes from historic average geohash yields. Predicted refers to simulations with climate change and counterfactual refers to simulations without further warming.
These are CSV files which can be opened in spreadsheet software. These are available under the CC-BY-NC License.
The distribution visualization has the following controls:
  • Esc: Exit the visualization
  • y: Change year
  • c: Change coverage
  • v: Change vs historic or counterfactual
  • u: Change unit size
The visualization will need focus in order to receive keyboard commands.
These data tell many different stories but consider a few different factors at play in these simulations:
  • Time: Things get worse as climate change progresses with the 2050 series seeing higher claims rates than 2030.
  • Coverage levels: Those insuring at higher levels (85%) see some very high claims rates under climate change.
  • Unit size: The effects of climate change become less pronounced when simulating with the full insured unit, where there may be more fields in a single policy, compared to sub-unit, which simulates at a smaller scale (5 character geohashes and near-individual field samples). This is due to the "portfolio effect" where larger units offer more chances for high performing fields to offset fields with worse outcomes (Knight et al 2010).
Regardless, the simulations with further warming (SSP245) generally fare worse than those that expect current growing conditions to continue into the future.
More about unit size
Due to the portfolio effect (where combining more fields together into an insured unit offers more opportunities for good outcomes in some fields to offset the negative outcomes in others), one of the more influential parameters on these results is unit size. In our Monte Carlo, we sample the actual unit size as reported by the Risk Management Agency (RMA 2024). However, in experiments where we either increase or decrease that size, this portfolio effect impacts both the climate change simulations and the counterfactual simulation without further warming. This means that the gap between the climate change and non-climate change simulations persists. This follow up result may suggest that these concerns remain regardless of how unit size may evolve.
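The portfolio effect itself is easy to demonstrate with simulated draws. The field-level numbers below are hypothetical, chosen only to show the mechanism:

```python
import numpy as np

rng = np.random.default_rng(7)

def loss_probability(fields_per_unit, n_units=20_000):
    """Share of simulated units whose average yield falls below 75% of APH.

    Field outcomes are illustrative draws (mean 100% of APH, std 20%).
    """
    outcomes = rng.normal(1.0, 0.20, size=(n_units, fields_per_unit))
    unit_yield = outcomes.mean(axis=1)
    return float((unit_yield < 0.75).mean())

# More fields per unit give more chances for good fields to offset bad
# ones, so the unit-level std shrinks and the loss probability falls.
small_unit = loss_probability(1)
large_unit = loss_probability(16)  # unit std is 4x smaller → far fewer claims
```

Note that this shrinkage applies to both the climate change and counterfactual simulations, which is why the gap between them persists across unit sizes.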
Taken altogether, these results reveal a very specific threat to corn yields and crop insurance: the APH does not necessarily capture increasing risk even as the rate of claims for which YP pays out more than doubles in the 2050 series. This specific higher volatility outpacing changing average yields used in the rate setting process is at the center of the risk posed to crop insurance.
More about simulation resolution
The resolution of these data could also influence results. In geohashing we currently use the 4 character size corresponding to 28 by 20 kilometers (Haugen 2020). We could also try:
  • Making the grid cells larger: Currently the data are finding significant results in most grid cells so increasing the sample at the cost of predictive resolution offers very limited benefit.
  • Making the grid cells smaller: We attempt to approximate this in the sub-unit size results with simulation of 5 character geohashes. That said, this substantially lowers the frequency with which a grid cell sees significant results, though this number can vary from simulation to simulation. This makes sense as the 5 character level nears the resolution of the underlying data, so increasing resolution introduces unacceptable noise.
Altogether, we continue to suggest use of the 4 character geohash.
All of this in mind, we will next explore possible policy adaptations that take into account the specific shape of the threat that we uncovered in this step.
Citations for this section
The citations for this section are as follows:
  • B. Haugen, “Geohash Size Variation by Latitude.” Mar. 14, 2020. [Online]. Available: https://bhaugen.com/blog/geohash-sizes/
  • T. O. Knight, K. H. Coble, B. K. Goodwin, R. M. Rejesus, and S. Seo, “Developing Variable Unit-Structure Premium Rate Differentials in Crop Insurance,” American Journal of Agricultural Economics, vol. 92, no. 1, pp. 141–151, 2010.
  • R. L. Nielsen, “Historical Corn Grain Yields in the U.S.” Purdue University, Feb. 2023. [Online]. Available: https://www.agry.purdue.edu/ext/corn/news/timeless/yieldtrends.html
  • G. Niemeyer, “geohash.org is public!” Labix Blog, Feb. 26, 2008. [Online]. Available: https://web.archive.org/web/20080305102941/http://blog.labix.org/2008/02/26/geohashorg-is-public/
  • RMA, “State/County/Crop Summary of Business.” United States Department of Agriculture, 2024. [Online]. Available: https://www.rma.usda.gov/SummaryOfBusiness/StateCountyCrop
  • R. D. Yanai, J. J. Battles, A. D. Richardson, C. A. Blodgett, D. M. Wood, and E. B. Rastetter, “Estimating Uncertainty in Ecosystem Budget Calculations,” Ecosystems, vol. 13, no. 2, pp. 239–248, Mar. 2010, doi: 10.1007/s10021-010-9315-8.
Some definition citations are in glossary.
Data downloads for results
The data downloads for this results section are as follows:
  • sim_hist.csv: Frequency information about changes from historic average geohash yields. Predicted refers to simulations with climate change and counterfactual refers to simulations without further warming.
These are available under the CC-BY-NC License.

Claims Visualization

We have a number of options to confront these increases in risk. For example, in addition to environmental co-benefits, regenerative practices can improve yield stability (Bowles et al 2020; Renwick et al 2021; Hunt et al 2020). Still, recalling the rates explorer, the average-based APH may hinder adaptation. Despite the valuable resilience offered by regenerative agriculture, these important practices may not always improve mean yields or can even come at the cost of a slightly reduced average (Deines et al 2023). This may be part of why crop insurance may discourage these resilience-building steps (Wang et al 2021; Chemeris et al 2022), even though such practices guard against elevations in the probability of loss events (Renwick et al 2021).
Some options for confronting this risk
Though not an exhaustive list, here are some proposals from the literature. As described later, the proposed policy changes do not necessarily mandate a specific option.
Alternative formulations require more research. However, for the sake of exploration, use this next interactive to consider what happens if one insures up to a certain number of standard deviations below the average instead of up to a percentage below the average yield.
This section contains interactive components. It will take just a moment to load the lab.
Load Interactives
Loading...
Claims: Visualization that explores how using the mean versus the standard deviation encourages different behavior. Which kinds of growing patterns does each seem to encourage?
Alternative data downloads:
  • export_claims.csv: The claims rate expected under different scenarios. Here threshold refers to coverage level (YP is 0.25 meaning 25% below average), thresholdStd refers to the translation of an average-based APH to std, and offsetBaseline refers to whether this includes APH changes over time (we recommend using "always").
These are CSV files which can be opened in spreadsheet software. These are available under the CC-BY-NC License.
The claims visualization has the following controls:
  • Esc: Exit the visualization
  • t: Change type
  • s: Change between high and low stability
The visualization will need focus in order to receive keyboard commands.
In examining hypothetical growing histories, what kinds of behaviors do each type of APH (the current average-based vs a possible std-based option) encourage? If an average-based approach encourages one to drive up the mean as high as possible, this alternative offers balance where increasing the average and stability both become beneficial. This recognizes the value that some growers offer the broader system in the form of resilience during difficult conditions.
More about the policy
Using recent data, insuring up to 25% below average under present-day risk is roughly equivalent to insuring up to about 1.5 standard deviations below average. Note that this change already has some small effects absent any behavior change: units not typically making claims pull down that equivalency or, in other words, it already shifts exposure towards more stable yields. Here is how this would be formulated:
  • Average-based (current): loss = max(coverage percentage * average yield - actual yield, 0)
  • Std-based (proposed): loss = max(average yield - coverage standard deviations * yield standard deviation - actual yield, 0)
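As a sketch, the two payout formulations can be compared side by side. The yields, averages, and coverage parameters below are hypothetical numbers chosen for illustration, not values from our data:

```python
def loss_avg_based(aph, actual_yield, coverage=0.75):
    """Current YP-style payout: the guarantee is a percentage of the
    average yield (APH)."""
    return max(coverage * aph - actual_yield, 0)

def loss_std_based(aph, yield_std, actual_yield, coverage_stds=1.5):
    """Proposed alternative: the guarantee is the average minus a number
    of standard deviations, so a stable history raises the guarantee."""
    return max(aph - coverage_stds * yield_std - actual_yield, 0)

# Two hypothetical units with the same average (150 bu/ac) and the same
# bad-year actual yield (100 bu/ac) but different stability
print(loss_avg_based(150, 100))      # both units: 12.5
print(loss_std_based(150, 10, 100))  # stable unit (std 10): 35.0
print(loss_std_based(150, 30, 100))  # volatile unit (std 30): 5.0
```

Under the average-based formula both units receive identical payouts, while the std-based formula protects the stable unit more generously for the same shortfall, which is the incentive shift toward stability discussed above.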
Before moving on, note that there have been some questions about the effectiveness of using crop insurance to encourage certain behaviors (Connor et al 2022). To that end, this method offers unique flexibility: while it could help provide an insurance incentive for practices like regenerative agriculture, any change which offers stability gets rewarded, similar to how any practice improving yields operates today. Such agnostic options may offer a non-prescriptive systems change and may address some prior concerns (Connor et al 2022).
Some of these changes may take place through 508h, but recall that law sets some of these structures (CFR and USC). Therefore, some adaptation may require legislative action. In any case, we will next conclude our conversation by looking for other insights from these data.
Citations for this section
The citations for this section are as follows:
  • T. M. Bowles et al., “Long-Term Evidence Shows that Crop-Rotation Diversification Increases Agricultural Resilience to Adverse Growing Conditions in North America,” One Earth, vol. 2, no. 3, pp. 284–293, Mar. 2020, doi: 10.1016/j.oneear.2020.02.007.
  • E. E. Butler and P. Huybers, “Adaptation of US maize to temperature variations,” Nature Clim Change, vol. 3, no. 1, pp. 68–72, Jan. 2013, doi: 10.1038/nclimate1585.
  • CFR and USC, Crop insurance. [Online]. Available: https://uscode.house.gov/view.xhtml?req=(title:7%20section:1508%20edition:prelim)
  • A. Chemeris, Y. Liu, and A. P. Ker, “Insurance subsidies, climate change, and innovation: Implications for crop yield resiliency,” Food Policy, vol. 108, p. 102232, Apr. 2022, doi: 10.1016/j.foodpol.2022.102232.
  • L. Connor, R. M. Rejesus, and M. Yasar, “Crop insurance participation and cover crop use: Evidence from Indiana county‐level data,” Applied Eco Perspectives Pol, vol. 44, no. 4, pp. 2181–2208, Dec. 2022, doi: 10.1002/aepp.13206.
  • J. M. Deines et al., “Recent cover crop adoption is associated with small maize and soybean yield losses in the United States,” Global Change Biology, vol. 29, no. 3, pp. 794–807, Feb. 2023, doi: 10.1111/gcb.16489.
  • N. D. Hunt, M. Liebman, S. K. Thakrar, and J. D. Hill, “Fossil Energy Use, Climate Change Impacts, and Air Quality-Related Human Health Damages of Conventional and Diversified Cropping Systems in Iowa, USA,” Environ. Sci. Technol., vol. 54, no. 18, pp. 11002–11014, Sep. 2020, doi: 10.1021/acs.est.9b06929.
  • S. Keronen, M. Helander, K. Saikkonen, and B. Fuchs, “Management practice and soil properties affect plant productivity and root biomass in endophyte‐symbiotic and endophyte‐free meadow fescue grasses,” J of Sust Agri & Env, vol. 2, no. 1, pp. 16–25, Mar. 2023, doi: 10.1002/sae2.12035.
  • R. Mangani, K. M. Gunn, and N. M. Creux, “Projecting the effect of climate change on planting date and cultivar choice for South African dryland maize production,” Agricultural and Forest Meteorology, vol. 341, p. 109695, Oct. 2023, doi: 10.1016/j.agrformet.2023.109695.
  • L. L. R. Renwick et al., “Long-term crop rotation diversification enhances maize drought resistance through soil organic matter,” Environ. Res. Lett., vol. 16, no. 8, p. 084067, Aug. 2021, doi: 10.1088/1748-9326/ac1468.
  • T. Tian et al., “Genome assembly and genetic dissection of a prominent drought-resistant maize germplasm,” Nat Genet, vol. 55, no. 3, pp. 496–506, Mar. 2023, doi: 10.1038/s41588-023-01297-y.
  • R. Wang, R. M. Rejesus, and S. Aglasan, “Warming Temperatures, Yield Risk and Crop Insurance Participation,” European Review of Agricultural Economics, vol. 48, no. 5, pp. 1109–1131, Nov. 2021, doi: 10.1093/erae/jbab034.
Some definition citations are in glossary.
Data downloads for discussion
The data downloads for this discussion section are as follows:
  • export_claims.csv: The claims rate expected under different scenarios. Here threshold refers to coverage level (YP is 0.25 meaning 25% below average), thresholdStd refers to the translation of an average-based APH to std, and offsetBaseline refers to whether this includes APH changes over time (we recommend using "always").
These are available under the CC-BY-NC License.

Neighborhood Explorer

So far, we have explored data aggregated across many neighborhoods. This final graphic maps those neighborhoods individually. Note that, to cut through noise, it defaults to statistically significant results, but one can turn off that filter if desired.
More about the statistical tests
Around 95% of the maize-growing acreage examined in this study sees statistically significant results (p < 0.05 / n) in at least one year of the 2050 series. We determine this with a Bonferroni-corrected (Bonferroni 1935; McDonald 2014) Mann Whitney U test (Mann and Whitney 1947; McDonald 2014) per neighborhood per year, as variance may differ between the expected and counterfactual sets (McDonald 2014).
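A minimal sketch of this testing procedure, assuming SciPy is available; our actual pipeline and data structures may differ, and the toy samples below are illustrative only:

```python
from scipy.stats import mannwhitneyu

def significant_neighborhoods(predicted, counterfactual, alpha=0.05):
    """For each neighborhood, run a Mann Whitney U test between the
    predicted (climate change) and counterfactual yield samples, using
    a Bonferroni-corrected threshold of alpha / number of tests."""
    n = len(predicted)
    threshold = alpha / n  # Bonferroni correction
    results = {}
    for name in predicted:
        stat, p_value = mannwhitneyu(
            predicted[name], counterfactual[name], alternative="two-sided"
        )
        results[name] = p_value < threshold
    return results

# Toy example: one clearly shifted neighborhood, one essentially unchanged
predicted = {"shifted": [10 + 0.1 * i for i in range(20)],
             "flat": [0.1 * i for i in range(20)]}
counterfactual = {"shifted": [0.1 * i for i in range(20)],
                  "flat": [0.05 + 0.1 * i for i in range(20)]}
print(significant_neighborhoods(predicted, counterfactual))
```

The Mann Whitney U test is rank-based, so it does not assume equal variances or normality in the two samples, which matches the motivation in the text.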
This section contains interactive components. It will take just a moment to load the lab.
Load Interactives
Loading...
Neighborhood: Visualization that explores the impacts of this study on insurers at the geohash level. How much does the yield distribution change? How does that impact insurers?
Alternative data downloads:
  • climate.csv: Information about expected changes to climate.
  • tool.csv: Geohash-level simulation outcomes.
These are CSV files which can be opened in spreadsheet software. These are available under the CC-BY-NC License.
The neighborhood visualization has the following controls:
  • Esc: Exit the visualization
  • v: Change visualization (map, scatter)
  • o: Change output (yield, risk)
  • c: Change coverage level (scatter only)
  • y: Change year
  • s: Change sample (avg year, all years)
  • t: Change threshold (p < 0.05 or p < 0.10)
  • b: Disable / enable Bonferroni correction
  • f: Disable / enable significance filter
  • g: Change growing condition variable
  • m: Change month (only if growing condition var selected)
The visualization will need focus in order to receive keyboard commands.
Some other interesting perspectives you may find:
  • Precipitation may offer a protective benefit: an increased chirps value may be associated with less loss elevation. This would potentially agree with prior work (Sinsawat et al 2004; Marouf et al 2013).
  • Different geographies may see different outcomes with increased risk potentially concentrated in a band stretching through Iowa, Illinois, and Indiana.
  • Many of the areas without significant changes to risk may have had too little acreage growing maize and so have too small a sample size.
Beyond digging into our original findings, additional insights can be found in our scientific paper (preprint), including more details about the neural network itself and its performance in different tasks.
Citations for this section
The citations for this section are as follows:
  • C. Bonferroni, “Il calcolo delle assicurazioni su gruppi di teste,” Studi in Onore del Professore Salvatore Ortu Carboni, 1935, [Online]. Available: https://www.semanticscholar.org/paper/Il-calcolo-delle-assicurazioni-su-gruppi-di-teste-Bonferroni-Bonferroni/98da9d46e4c442945bfd88db72be177e7a198fd3
  • H. B. Mann and D. R. Whitney, “On a Test of Whether one of Two Random Variables is Stochastically Larger than the Other,” Ann. Math. Statist., vol. 18, no. 1, pp. 50–60, Mar. 1947, doi: 10.1214/aoms/1177730491.
  • K. Marouf, M. Naghavi, A. Pour-Aboughadareh, and H. Naseri rad, “Effects of drought stress on yield and yield components in maize cultivars (Zea mays L),” International Journal of Agronomy and Plant Production, vol. 4, pp. 809–812, Jan. 2013.
  • J. H. McDonald, Handbook of Biological Statistics, 3rd ed. Baltimore, Maryland: Sparky House Publishing, 2014. [Online]. Available: http://www.biostathandbook.com/#print
  • V. Sinsawat, J. Leipner, P. Stamp, and Y. Fracheboud, “Effect of heat stress on the photosynthetic apparatus in maize (Zea mays L.) grown at control or high temperature,” Environmental and Experimental Botany, vol. 52, no. 2, pp. 123–129, Oct. 2004, doi: 10.1016/j.envexpbot.2004.01.010.
  • C. Spearman, “The Proof and Measurement of Association between Two Things,” The American Journal of Psychology, vol. 15, no. 1, p. 72, Jan. 1904, doi: 10.2307/1412159.
Some definition citations are in glossary.
Data downloads for conclusion
The data downloads for this conclusion section are as follows:
  • climate.csv: Information about expected changes to climate.
  • tool.csv: Geohash-level simulation outcomes.
These are available under the CC-BY-NC License.

About this website

This website houses interactive exploratory tools to complement a traditional scientific paper (preprint) which offers further details.
This is a collaboration between folks at the University of California Berkeley and the University of Arkansas:
  • A Samuel Pottinger: Lead author, machine learning / artificial intelligence, data visualization, interactive science. Schmidt Center for Data Science and Environment at UC Berkeley.
  • Lawson Connor: Agricultural economics and policy. Department of Agricultural Economics and Agribusiness, University of Arkansas.
  • Brookie Guzder-Williams: Machine learning / artificial intelligence. Schmidt Center for Data Science and Environment at UC Berkeley.
  • Maya Weltman-Fahs: Policy and management. Schmidt Center for Data Science and Environment at UC Berkeley.
  • Timothy Bowles: Agroecology, soil health, and regenerative agriculture. Department of Environmental Science, Policy & Management, University of California Berkeley.
Funding for the project comes from the Eric and Wendy Schmidt Center for Data Science and Environment at UC Berkeley. See also our humans.txt file.
How to cite us
We are currently in preprint. Please see our citation (CFF) file.
This project is open source with code released under the BSD License with data available under the CC-BY-NC License. The interactive components were made in Sketchingpy and can be executed outside of the browser. Other open source components used and additional details can be found in our readme.
Download data from this project
The following data are available for download:
  • climate.csv: Description of how climate variables change in the simulations.
  • export_claims.csv: Information about the claims rate under different conditions.
  • sim_hist.csv: Information about simulation-wide yield distributions under different conditions.
  • sweep_ag_all.csv: Information about sweep outcomes and model performance.
  • tool.csv: Geographically specific information about simulation outcomes at the 4 character geohash level.
  • usda_rma_sob.zip: Archive of USDA Risk Management Agency (RMA) Summary of Business (SOB) with Avro format and consistent encoding where possible. This supports some claims in our paper.
More information about these files can be found at Zenodo. Additional resources can be found in our open source repository.
By using this website, you agree to the terms / privacy. For inquiries, please email us at hello@ag-adaptation-study.pub.

Settings

The following options may help some more comfortably engage with this content. When you are done configuring your preferred experience, you can return to the start of this interactive lab complementing our scientific paper (preprint).

Visualization keyboard controls

Readers who prefer to use the keyboard instead of a mouse or their finger for a touchscreen can elect to have a list of keyboard controls shown below each interactive visualization. Press escape to leave an interactive.

Tooltips

Some words displayed with dotted underline will show additional text when either hovered over, tapped / clicked on, or given focus. This may be distracting for some users and may be disabled.

Visualizations

Those who prefer not to engage with the interactive visualizations can use alternatives. This may be preferable for those using adaptive technologies like screen readers.

Symbols and emojis

Those using adaptive technologies may want to disable symbols and emojis for a more streamlined experience.

Glossary

Actual Production History
Actual Production History. For our purposes, this will refer to the average of the last ten years of crop yields (FCIC 2020). This has a number of different roles in insurance but is often used to estimate an expected or typical yield.
APH
Abbreviation for Actual Production History.
Artificial Neural Network
A form of AI modeled after how biological brains work. In this system, there are different neurons connected to each other in a network that "learn" over time. For more information see The Nature of Code (Shiffman 2024).
Blocks
Sometimes certain data can be distracting to the model or simply having a large number of variables can become overwhelming (Karanam 2021). This option lets us take out individual climate variables.
CHC-CMIP6
Climate hazards CMIP6 model which offers both historic climate data / growing conditions as well as future projections (Williams et al 2024).
chirps
Variable name referring to a model for daily precipitation (Williams et al 2024).
County average yield
We use this phrase to refer to the average yield (APH equivalent) for all fields in the county. See Plastina and Edwards (2020).
Counterfactual
A projection of future yield changes which does not assume further warming in contrast to SSP245.
County premium rate
Based on the crop and county, we use this term to refer to how much it costs to insure one future dollar of expected crop. A value of 4% means 4 cents for each dollar insured. See Plastina and Edwards (2020).
Coverage amount
Up to what percent of APH will be covered. If coverage is 75% and one sees a yield that is only 70% of APH, then the policy will pay out 75% - 70% or 5% of APH. See Plastina and Edwards (2020).
Data Science
This is the type of mathematical and computational science in which we build models, such as through artificial intelligence, to make sense of data and predict the future (IBM).
Depth
The number of layers in the network (Brownlee 2020). Deeper networks can learn more sophisticated lessons from the data but can require more data to train (Brownlee 2019); otherwise, the network may memorize examples instead of learning broader trends (Brownlee 2020).
Dropout
Randomly turn off neurons while training (Srivastava et al 2014). Conceptually, this encourages finding multiple ways to use inputs in making predictions. This avoids relying on one small aspect of the data (though at the risk of disrupting learning). This "knob" turned up means neurons are disabled with higher frequency.
Geohash
We create a grid through a process called geohashing (Niemeyer 2008) where all of the US Corn Belt gets assigned to a cell. Longer geohashes mean smaller cells and, in the 4 character version, each cell is the size of about 28 by 20 kilometers (Haugen 2020).
Grower
Grower is a more general term we use to refer to farmers and producers regardless of their method of production (Mason 2024).
Insured Unit
A set of fields which are insured together in a policy (Munch 2024). For reference, this is sometimes also called a risk unit.
Insured Unit APH
APH for the set of fields in the insured unit for which a policy is being created. See FCIC (2020).
L2
A penalty on the model for very deep pathways between neurons (Oppermann 2020). Conceptually, this avoids focusing too much on small specifics in the data unless there is strong evidence for that connection. Turning this "knob" up sets a higher bar for the evidence required for those deep connections.
MAE
Abbreviation for Mean Absolute Error.
Maize
In scientific settings, the corn crop as a whole is often referred to as maize regardless of its usage.
Mean
This is simply the average like the average of the yields within the grid cell.
Mean Absolute Error
This is a simple average of the error magnitude. In other words, it indicates, on average, how far off predictions from the network are from actuals regardless of if the model was too high or too low. For example, if the model predicts that a neighborhood's yield will drop by 10% and it actually drops by 15%, the MAE would be 5%. See Acharya (2021).
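As a sketch of this definition (the predicted and actual values below are illustrative):

```python
def mean_absolute_error(predicted, actual):
    """Average magnitude of prediction errors, ignoring whether the
    model was too high or too low."""
    errors = [abs(p - a) for p, a in zip(predicted, actual)]
    return sum(errors) / len(errors)

# Matching the worked example: a predicted 10% drop against an actual
# 15% drop gives an error magnitude of 5%
print(mean_absolute_error([-0.10], [-0.15]))  # ~0.05
```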
Monte Carlo
A type of simulation technique that works with variables that have some uncertainty. See Kwiatkowski (2022).
MPCI
Abbreviation for Multi-Peril Crop Insurance.
Multi-Peril Crop Insurance
Covering 494 million acres (Tsiboe and Turner 2023), this is the "oldest and most common form of federal crop insurance" (Chite et al 2005).
Overfit
An overfit model has mistaken noise in the training examples as actual trends so may perform poorly when predicting outcomes for data it has not seen before. See Koehrsen (2018).
Portfolio Effect
In the context of this explorable, the portfolio effect emerges when combining more fields together into an insured unit. This offers more opportunities for good outcomes in some fields to offset the negative outcomes in others. Our results find that increasing the size of the portfolio tends to decrease risk. See McClain.
Projected Price
Expected price at which the crop will sell, often based on daily futures market data. See Plastina and Edwards (2020).
rhn
Variable name for overall relative humidity (Williams et al 2024).
rhx
Variable name for relative humidity peak (Williams et al 2024).
SCYM
Scalable Crop Yield Mapper (Lobell et al 2015) which is a model that allows us to estimate historic yields.
Significant
Statistically significant means unlikely to have occurred by chance alone. See Hill et al (2018).
Std
Abbreviation for standard deviation.
Standard Deviation
This is a measure of how wide the range of a distribution is: for example, how variable yields are expected to be in a grid cell. Typically we assume yields approximately follow a normal distribution within a grid cell. See Hargrave (2024).
SSP245
A future projection of climate change expecting moderate warming. See Hausfather and Peters (2020).
Subsidy Rate
What percent of the overall policy cost is covered by subsidy. A value of 55% means that the grower pays for 45% of the overall cost of the policy. See Plastina and Edwards (2020).
svp
Variable name for saturation vapor pressure (Williams et al 2024).
Sweep
Asking the computer to try different combinations of options or hyper-parameters to see how different model configurations perform (Joseph 2018).
tmax
Variable name for maximum temperature (Williams et al 2024).
Test Set
This is a set of data we save until the very end. This lets us estimate how our model will perform in data it hasn't seen before. It helps us set expectations for future performance (Shah 2017).
tmin
Variable name for minimum temperature (Williams et al 2024).
Training Set
This is the set of data / examples from which the model learns. These will change the connections between neurons (Shah 2017).
Underfit
An underfit model fails to capture all of the nuances of the data and can conceptually be thought of as "oversimplified," though the amount of learning may be limited by the data sample available. See Koehrsen (2018).
US Corn Belt
A multi-state area within the United States with substantial corn growing activity (NCEI 2024).
Validation Set
This is a set of data we don't show the model while it learns. It lets us compare competing models if we are experimenting with different options in constructing our neural network (Shah 2017).
vpd
Variable name for vapor pressure deficit (Williams et al 2024).
wbgt
Variable name for wet bulb globe temperature (Williams et al 2024).
YP
Yield Protection. This is the coverage option within MPCI that we will simulate. That said, climate effects seen here may impact other options available to growers such as Revenue Protection. Regardless, Yield Protection typically guarantees yield itself up to 75% of an APH (Osiensky 2021).
508h
Process by which new insurance products can be suggested. This could include supplemental policies covering more than 75% of yields (Schnitkey et al 2023) though typically programs cannot cover more than 85% as we will see in a moment.
Citations for glossary
The citations for this glossary section are as follows:
  • J. Brownlee, “How Much Training Data is Required for Machine Learning?” Guiding Tech Media, May 23, 2019. [Online]. Available: https://machinelearningmastery.com/much-training-data-required-machine-learning/
  • J. Brownlee, “How to Control Neural Network Model Capacity With Nodes and Layers.” Guiding Tech Media, Aug. 25, 2020. [Online]. Available: https://machinelearningmastery.com/how-to-control-neural-network-model-capacity-with-nodes-and-layers/
  • FCIC, “Common Crop Insurance Policy 21.1-BR.” United States Department of Agriculture, Nov. 2020. [Online]. Available: https://www.rma.usda.gov/-/media/RMA/Publications/Risk-Management-Publications/rma_glossary.ashx?la=en
  • M. Hargrave, “Standard Deviation Formula and Uses vs. Variance.” Investopedia, May 23, 2024. [Online]. Available: https://www.investopedia.com/terms/s/standarddeviation.asp
  • B. Haugen, “Geohash Size Variation by Latitude.” Mar. 14, 2020. [Online]. Available: https://bhaugen.com/blog/geohash-sizes/
  • Z. Hausfather and G. Peters, “Emissions – the ‘business as usual’ story is misleading.” Nature, Jan. 29, 2020. [Online]. Available: https://www.nature.com/articles/d41586-020-00177-3
  • A. Hill et al., “Test Statistics: Crash Course Statistics #26.” CrashCourse, Aug. 08, 2018. [Online]. Available: https://www.youtube.com/watch?v=QZ7kgmhdIwA
  • IBM, “What is data science?” IBM. [Online]. Available: https://www.ibm.com/topics/data-science
  • R. Joseph, “Grid Search for Model Tuning.” Towards Data Science, Dec. 29, 2018. [Online]. Available: https://towardsdatascience.com/grid-search-for-model-tuning-3319b259367e
  • S. Karanam, “Curse of Dimensionality — A ‘Curse’ to Machine Learning.” Towards Data Science, Aug. 10, 2021. [Online]. Available: https://towardsdatascience.com/curse-of-dimensionality-a-curse-to-machine-learning-c122ee33bfeb
  • W. Koehrsen, “Overfitting vs. Underfitting: A Complete Example.” Towards Data Science, Jan. 28, 2018. [Online]. Available: https://towardsdatascience.com/overfitting-vs-underfitting-a-complete-example-d05dd7e19765
  • R. Kwiatkowski, “Monte Carlo Simulation — a practical guide.” Jan. 30, 2022. [Online]. Available: https://towardsdatascience.com/monte-carlo-simulation-a-practical-guide-85da45597f0e
  • D. B. Lobell, D. Thau, C. Seifert, E. Engle, and B. Little, “A scalable satellite-based crop yield mapper,” Remote Sensing of Environment, vol. 164, pp. 324–333, Jul. 2015, doi: 10.1016/j.rse.2015.04.021.
  • D. Mason, “Grower or Farmer - Are We Witnessing a Change In Title?” YouTube, Jun. 01, 2024. [Online]. Available: https://www.youtube.com/watch?v=-dSVV2nmKBI
  • A. McClain, “Definition of the Portfolio Effect.” Sapling. [Online]. Available: https://www.sapling.com/6940690/definition-portfolio-effect
  • D. Munch, “Crop Insurance 101: The Basics.” Farm Bureau, May 02, 2024. [Online]. Available: https://www.fb.org/market-intel/crop-insurance-101-the-basics
  • NCEI, “Corn Belt.” NOAA, 2024. [Online]. Available: https://www.ncei.noaa.gov/access/monitoring/reference-maps/corn-belt
  • G. Niemeyer, “geohash.org is public!” Labix Blog, Feb. 26, 2008. [Online]. Available: https://web.archive.org/web/20080305102941/http://blog.labix.org/2008/02/26/geohashorg-is-public/
  • A. Oppermann, “Regularization in Deep Learning — L1, L2, and Dropout.” Towards Data Science, Feb. 19, 2020. [Online]. Available: https://towardsdatascience.com/regularization-in-deep-learning-l1-l2-and-dropout-377e75acc036
  • J. Osiensky, “Understanding the Price Side of Revenue Crop Insurance.” Washington State University, Aug. 19, 2021. [Online]. Available: https://smallgrains.wsu.edu/understanding-the-price-side-of-revenue-crop-insurance/
  • A. Plastina and W. Edwards, “Yield Protection Crop Insurance.” Iowa State University Extension and Outreach, Dec. 2020. [Online]. Available: https://www.extension.iastate.edu/agdm/crops/html/a1-52.html
  • G. Schnitkey, N. Paulson, and C. Zulauf, “Privately Developed Crop Insurance Products and the Next Farm Bill.” University of Illinois, Mar. 14, 2023. [Online]. Available: https://farmdocdaily.illinois.edu/2023/03/privately-developed-crop-insurance-products-and-the-next-farm-bill.html
  • T. Shah, “About Train, Validation and Test Sets in Machine Learning.” Towards Data Science, Dec. 06, 2017. [Online]. Available: https://towardsdatascience.com/train-validation-and-test-sets-72cb40cba9e7
  • D. Shiffman, The nature of code: simulating natural systems with JavaScript. San Francisco: No Starch Press, 2024.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” Journal of Machine Learning Research, vol. 15, no. 56, pp. 1929–1958, 2014.
  • F. Tsiboe and D. Turner, “Crop Insurance at a Glance.” Economic Research Service, USDA, May 03, 2023. [Online]. Available: https://www.ers.usda.gov/topics/farm-practices-management/risk-management/crop-insurance-at-a-glance/
  • E. Williams, C. Funk, P. Peterson, and C. Tuholske, “High resolution climate change observations and projections for the evaluation of heat-related extremes,” Sci Data, vol. 11, no. 1, p. 261, Mar. 2024, doi: 10.1038/s41597-024-03074-w.