
Social Disparity in Impacts of Climate Disasters in the United States

Regression: Overview

Regression is a method for estimating the relationship between one or more independent variables and a dependent variable. In machine learning, regression is used both to extract insights and to make predictions based on that relationship.

[Image: linear-regression.png]

Linear regression model with actual values in red, the regression line in blue, and errors as the black lines (Image credit: Stanford Stats 202)

There are several regression models, including:

  • Linear regression: Establishes a linear relationship between the independent and dependent variables. The model aims to find the best-fit line according to some mathematical criterion; for example, ordinary least squares chooses the line that minimizes the sum of squared differences between the predicted and actual values.

  • Multiple regression: An extension of linear regression used when there is more than one independent variable; the resulting model is a hyperplane fit to minimize error.

  • Polynomial regression: Fits a polynomial equation to the data and is applicable when there is a nonlinear relationship between the variables.

  • Logistic regression: Unlike the models above, logistic regression is used for binary classification. It models a binary outcome using a sigmoid function, where the log-odds of the event are a linear combination of the independent variables.

[Image: logistic-regression.png]

Logistic regression model (Image credit: GraphPad)
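The least-squares idea described above can be sketched in a few lines of Python. This is a generic illustration on synthetic data, not this project's code; NumPy's `lstsq` solves for the coefficients that minimize the sum of squared residuals.

```python
import numpy as np

# Ordinary least squares on synthetic data: fit y = a*x + b by
# minimizing the sum of squared differences between predictions and
# actual values (the criterion described above).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=100)  # true slope 2.0, intercept 1.0

X = np.column_stack([x, np.ones_like(x)])       # design matrix [x, 1]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # solves min ||X @ coef - y||^2
slope, intercept = coef
print(slope, intercept)  # recovers values close to the true slope and intercept
```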


Some limitations of regression models are:

  • Assumption of mathematical relationship: Regression models assume there is some mathematical relationship (e.g., linearity) between the independent and dependent variables. If this assumption does not hold, the model's predictions may be inaccurate.

  • Limited to numerical data: Methods such as linear and polynomial regression are suitable only for numerical data; for binary outcomes, techniques such as logistic regression can be applied instead.

  • Correlation does not imply causation: Regression analysis identifies correlations between variables, not causation. One must be careful when communicating the results.

When applying regression models, it is important to be aware of the influence of outliers: extreme values in the data can pull the regression fit strongly and make it less accurate. Another concern is overfitting and underfitting. Overfitting occurs when the model is too complex and fits the training data too closely; underfitting occurs when the model is too simple to capture the underlying relationship. In either case, the model generalizes poorly to new data. For example, with polynomial regression, overfitting is more likely when a high-degree polynomial is used, and underfitting is likely when a linear or low-degree polynomial is used.

[Image: polynomial.png]

Under-, good, and overfitting with polynomial regression (Image credit: Animesh Agarwal)
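This under/overfitting behavior can be demonstrated on synthetic data (an illustrative sketch, not this project's code): fit polynomials of increasing degree to noisy quadratic data and compare training error against error on held-out points.

```python
import numpy as np

# Fit polynomials of degree 1 (underfit), 2 (good fit), and 8 (overfit)
# to noisy samples of a quadratic function, then compare mean squared
# error on the training points vs. held-out test points.
rng = np.random.default_rng(1)
f = lambda x: x**2 - 2 * x                      # true nonlinear relationship
x_train = np.linspace(-3, 3, 15)
x_test = np.linspace(-2.8, 2.8, 50)
y_train = f(x_train) + rng.normal(0, 1.0, x_train.size)
y_test = f(x_test) + rng.normal(0, 1.0, x_test.size)

results = {}
for degree in (1, 2, 8):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse_train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    mse_test = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    results[degree] = (mse_train, mse_test)
    print(f"degree {degree}: train MSE {mse_train:.2f}, test MSE {mse_test:.2f}")
```

Training error always shrinks as the degree grows, but the degree-1 fit misses the curvature entirely, while a high degree chases the noise: exactly the underfitting and overfitting cases pictured above.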

In this section of the project, linear regression is applied to establish relationships between social vulnerability and climate risk by county. Several CDC social vulnerability indexes, which measure social factors that make a population more vulnerable to crises, are used as the independent variables, while FEMA's climate disaster risk index is used as the dependent variable. In this case, linear regression is not used to make predictions but rather to understand correlations between climate risk and social vulnerability: for example, whether counties with larger African American populations also experience higher risk of heat waves. This can reveal insights into how climate impacts are distributed across geographic locations.

Data preparation

Linear regression requires paired continuous quantitative variables. I constructed pairs of independent and dependent variables, where the independent variables were four socioeconomic measures: the percentage of people below 150% of the poverty estimate, and the percentages of people who are African American, Hispanic, and Asian. The dependent variables were total risk (i.e., aggregated risk across 18 climate disasters) and risk of heat waves, from FEMA's National Risk Index (NRI). I merged these dimensions from the two datasets based on the county ID. Below are the NRI and SVI data, before preparation.

[Images: nri.png, svi.png]

Raw NRI and SVI data

The following is the processed data, with 3,231 rows, each corresponding to a U.S. county. The 4 independent socioeconomic variables and 2 dependent climate risk variables are all included in this dataset, but during the regression analysis, I modeled only one independent variable against one dependent variable at a time. The prepared data can be found here.

[Image: cleaned-data.png]

Code

All code for data preparation and regression in Python can be found here.
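The merge step can be sketched as follows. This is a minimal illustration with made-up values and hypothetical column names, not the actual NRI/SVI schemas or this project's code: each dataset is keyed by a county ID, and an inner merge keeps only counties present in both.

```python
import pandas as pd

# Toy stand-ins for the two datasets; real NRI/SVI field names differ.
nri = pd.DataFrame({
    "county_id": [1001, 1003, 1005],
    "total_risk": [92.1, 85.4, 78.9],        # aggregated climate risk score
    "heat_wave_risk": [60.2, 55.0, 70.3],
})
svi = pd.DataFrame({
    "county_id": [1001, 1003, 1007],
    "pct_poverty_150": [18.5, 12.3, 25.1],   # % below 150% of poverty estimate
    "pct_african_american": [21.0, 9.4, 44.2],
})

# Inner merge on the shared county ID keeps rows found in both datasets
merged = nri.merge(svi, on="county_id", how="inner")
print(merged)  # counties 1001 and 1003 appear in both
```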

Results

Each pair of independent and dependent variables was modeled using linear regression. Below are all 8 regression models, each visualized with its regression line.

[Images: pov-total.png, pov-hwav.png, afam-total.png, afam-hwav.png, hisp-total.png, asian-total.png, hisp-hwav.png, asian-hwav.png]

All regression models have a positive slope, aside from heat wave risk vs. percent Hispanic, which indicates that for the other models there may be a positive correlation between the socioeconomic vulnerability variables and climate risks. The largest slopes correspond to the relationships with the largest increase in climate risk per percentage-point increase in the population group. In order, these are: total climate risk vs. percentage Asian and heat wave risk vs. percentage Asian (both possibly attributable to the smaller overall Asian American population), heat wave risk vs. percentage African American, total climate risk vs. percentage African American, and total climate risk vs. percentage Hispanic. Based on the plots of the actual data values, the data may not be well estimated by a linear model, although the positive relationship between the socioeconomic variables and the climate risk variables can still be seen.
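The slope comparison above can be reproduced in outline. This is a sketch on synthetic county-level data with hypothetical column names, not the actual NRI/SVI columns: fit a degree-1 polynomial (simple linear regression) to each variable pair and rank the pairs by slope.

```python
import numpy as np

# Synthetic stand-ins for two county-level independent variables; the
# dependent variable here depends on poverty rate but not on percent
# Hispanic, mimicking one correlated and one uncorrelated pair.
rng = np.random.default_rng(2)
n = 200
data = {
    "pct_poverty_150": rng.uniform(0, 40, n),
    "pct_hispanic": rng.uniform(0, 60, n),
}
total_risk = 50 + 0.8 * data["pct_poverty_150"] + rng.normal(0, 5, n)

# Simple linear regression for each pair: a degree-1 polynomial fit
slopes = {}
for name, x in data.items():
    slope, intercept = np.polyfit(x, total_risk, 1)
    slopes[name] = slope

# Rank pairs by slope: largest increase in risk per percentage point first
for name, s in sorted(slopes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: slope {s:.3f}")
```

The fitted slope for the correlated variable lands near the true coefficient, while the unrelated variable's slope sits near zero, which is the pattern used above to rank the variable pairs.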

Conclusion

In this section, linear regression was performed on combinations of 4 socioeconomic independent variables (percentage of people below 150% of the poverty estimate, and percentages of people who are African American, Hispanic, and Asian) and 2 climate risk dependent variables (total climate risk and heat wave risk). Seven of the eight regression models showed a positive correlation between the independent and dependent variables. The regression models suggest that the largest increases in climate risk per percentage-point increase in a population group are, in order: total climate risk vs. percentage Asian, heat wave risk vs. percentage Asian (both possibly attributable to the smaller overall Asian American population), heat wave risk vs. percentage African American, total climate risk vs. percentage African American, and total climate risk vs. percentage Hispanic. Surprisingly, heat wave risk vs. percentage Hispanic had a slope of -0.02, which suggests these two variables are essentially uncorrelated. In future analyses, the socioeconomic variables could be normalized to compare impacts across demographic groups. Finally, these findings assume that the independent and dependent variables have a linear relationship, which may not hold based on visual inspection of the plots. Nevertheless, the positive relationships between several of the social vulnerability variables and the climate risk variables are corroborated by the visualizations.
