- Name: Marcela Violeta Lauria
- Matrikelnummer: 12146202
01 - An energy balance model with hysteresis
Based on the code we wrote in week 07, code the following extension. For this exercise it's OK to copy-paste my solutions and start from there. Only copy the necessary (no useless code)!
The planetary albedo $\alpha$ is in fact changing with climate change. As the temperature drops, sea ice and ice sheets extend (increasing the albedo). Conversely, the albedo decreases as temperature rises. The planetary albedo of our simple energy balance model follows the following equation:
$$ \alpha = \begin{cases} 0.3,& \text{if } T \gt 280\\ 0.7,& \text{if } T \lt 250\\ a T + b, & \text{otherwise} \end{cases} $$
01-01: Compute the parameters $a$ and $b$ so that the equation is continuous at T=250K and T=280K.
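Before setting up a linear system, the parameters can be checked by hand: continuity forces a straight line through the points (250, 0.7) and (280, 0.3), so $a$ is just the slope between them. A quick sketch (variable names are mine):

```python
# slope of the line through (250 K, 0.7) and (280 K, 0.3)
a = (0.3 - 0.7) / (280 - 250)   # -0.01333... per K
b = 0.7 - a * 250               # 4.03333..., so the line hits 0.7 at 250 K
print(a, b)
```

These are the same numbers as solving the 2x2 linear system for the two continuity conditions.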
01-02: Now write a function called alpha_from_temperature which accepts a single positional parameter T as input (a scalar) and returns the corresponding albedo. Test your function using doctests to make sure that it complies with the instructions above.
01-03: Adapt the existing code from week 07 to write a function called temperature_change_with_hysteresis which accepts t0 (the starting temperature in K) and n_years (the number of simulation years) as positional arguments, and tau (the atmosphere transmissivity) as keyword argument (default value 0.611). Verify that:
- the stabilization temperature with t0 = 292 and default tau is approximately 288K
- the stabilization temperature with t0 = 265 and default tau is approximately 233K
01-04: Realize a total of N simulations with starting temperatures regularly spaced between t0 = 206K and t0 = 318K and plot them on a single plot for n_years = 50. The plot should look somewhat similar to this example for N = 21.
Bonus: only if you want (and if time permits), you can try to increase N and add colors to your plot to create a graph similar to this one.
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
```
01-01:
```python
A = np.array([[250, 1], [280, 1]])
Y = np.array([0.7, 0.3])
A_inv = np.linalg.inv(A)
X = A_inv @ Y
X
```
array([-0.01333333, 4.03333333])
01-02:
```python
def alpha_from_temperature(T=206):
    """Return the albedo corresponding to the input temperature T.

    - if T < 250, alpha = 0.7
    - if 250 <= T <= 280, alpha = -0.01333 * T + 4.033
    - if T > 280, alpha = 0.3

    Parameters
    ----------
    T : float, optional
        temperature (K), per default T = 206

    Returns
    -------
    alpha (float)

    Examples
    --------
    >>> alpha_from_temperature(240)
    0.7
    >>> alpha_from_temperature(290)
    0.3
    >>> alpha_from_temperature(250)
    0.7000000000000006
    """
    if T >= 0:
        if T < 250:
            alpha = 0.7
        elif T > 280:
            alpha = 0.3
        else:
            alpha = X[0] * T + X[1]
    else:
        raise ValueError('the number you entered does not correspond to the '
                         'valid range: please try with a value above 0')
    return alpha

import doctest
doctest.testmod()
```
TestResults(failed=0, attempted=3)
01-03:
```python
def asr(alpha=0.3):
    """Absorbed Solar Radiation (W m-2).

    Parameters
    ----------
    alpha : float, optional
        the planetary albedo

    Returns
    -------
    the asr (float)

    Examples
    --------
    >>> print(f'{asr():.2f}')
    238.35
    >>> print(f'{asr(alpha=0.32):.2f}')
    231.54
    """
    s0 = 1362
    return (1 - alpha) * s0 / 4

# Testing
import doctest
doctest.testmod()
```
TestResults(failed=0, attempted=5)
```python
def olr(t, tau=0.611):
    """Outgoing Longwave Radiation (W m-2).

    Parameters
    ----------
    t : float
        the atmosphere temperature (K)
    tau : float, optional
        the atmosphere transmissivity (-)

    Returns
    -------
    the olr (float)

    Examples
    --------
    >>> print(f'{olr(288):.2f}')
    238.34
    >>> print(f'{olr(288, tau=0.63):.2f}')
    245.75
    """
    sigma = 5.67E-8
    return sigma * tau * t**4

# Testing
import doctest
doctest.testmod()
```
TestResults(failed=0, attempted=7)
```python
def temperature_change_with_hysteresis(t0, n_years, tau=0.611):
    """Temperature change scenario after change of transmissivity.

    Parameters
    ----------
    t0 : float
        the starting temperature (K)
    n_years : int
        the number of simulation years
    tau : float, optional
        the atmosphere transmissivity (-)

    Returns
    -------
    years, temperature

    Examples
    --------
    >>> y, t = temperature_change_with_hysteresis(265, 50)
    >>> len(y) == len(t) == 51
    True
    """
    C = 4.0e+08              # heat capacity (J m-2 K-1)
    dt = 60 * 60 * 24 * 365  # one year in seconds
    alpha = alpha_from_temperature(t0)  # start from the albedo matching t0
    years = np.arange(n_years + 1)
    temperature = np.zeros(n_years + 1)
    temperature[0] = t0
    for i in range(n_years):
        temperature[i + 1] = temperature[i] + dt / C * (asr(alpha=alpha) - olr(temperature[i], tau=tau))
        alpha = alpha_from_temperature(temperature[i + 1])
    return years, temperature

# Testing
import doctest
doctest.testmod()
```
TestResults(failed=0, attempted=9)
01-04:
```python
N = 21
n_years = 50
t0_values = np.linspace(206, 318, N)

plt.figure(figsize=(10, 6))
for i in range(N):
    t0 = t0_values[i]
    years, temperature = temperature_change_with_hysteresis(t0, n_years)
    plt.plot(years, temperature)

plt.xlabel("Years")
plt.grid()
plt.ylabel("Temperature (K)")
plt.title(f"Temperature Change with Hysteresis for {n_years} Years")
plt.show()
```
02 - Weather station data files
I downloaded 10 min data from the recently launched ZAMG data hub. The data file contains selected parameters from the "INNSBRUCK-FLUGPLATZ (ID: 11804)" weather station.
You can download the data from the following links (right-click + "Save as..."):
- station data
- parameter metadata
- station list from the ZAMG (in a better format than last time)
Let me open the data for you and display its content:
```python
df = pd.read_csv('INNSBRUCK-FLUGPLATZ_Datensatz_20150101_20211231.txt', index_col=1, parse_dates=True)
df = df.drop('station', axis=1)
```
02-01: after reading the documentation of the respective functions (and maybe try a few things yourself), explain in plain sentences:
- what am I asking pandas to do with the index_col=1, parse_dates=True keyword arguments? Why am I doing this?
With 'index_col=1' we set the index of the DataFrame: the moment when the data was recorded is the second column of the file, and we make it the index of our 'table'. 'parse_dates=True' converts the dates in that column into timestamp objects; it works as a boolean, so if True, pandas tries to parse the index as dates.
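The effect can be shown on a tiny made-up CSV (the data below is invented, not the station file):

```python
import io
import pandas as pd

csv = "station,time,TL\n11804,2015-01-01 00:00,1.3\n11804,2015-01-01 00:10,1.1\n"
# index_col=1 -> the second column ("time") becomes the index;
# parse_dates=True -> that index is parsed into a DatetimeIndex
demo = pd.read_csv(io.StringIO(csv), index_col=1, parse_dates=True)
print(type(demo.index))
```

Because the index is now a DatetimeIndex, date-based selection like `demo.loc['2015']` works.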
- what am I asking pandas to do with .drop()? Why axis=1?
It takes away the first column, which holds a number associated with the station where the data was collected. Since it is the same station in the whole file, this column is not interesting to keep. axis=1 tells .drop() to remove a column rather than a row.
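A minimal illustration of the axis argument (the toy DataFrame is mine):

```python
import pandas as pd

demo = pd.DataFrame({'station': [11804, 11804], 'TL': [1.3, 1.1]})
dropped = demo.drop('station', axis=1)  # axis=1 -> drop a column, not a row
print(list(dropped.columns))
```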
Now let me do something else for you:
```python
dfmeta = pd.read_csv('ZEHNMIN Parameter-Metadaten.txt', index_col=0)
dfmeta.loc[df.columns]
```
| | Kurzbeschreibung | Beschreibung | Einheit |
|---|---|---|---|
| DD | Windrichtung | Windrichtung, vektorielles Mittel über 10 Minuten | ° |
| FF | vektorielle Windgeschwindigkeit | Windgeschwindigkeit, vektorielles Mittel über ... | m/s |
| GSX | Globalstrahlung | Globalstrahlung, arithmetisches Mittel über 10... | W/m² |
| P | Luftdruck | Luftdruck, Basiswert zur Minute10 | hPa |
| RF | Relative Feuchte | Relative Luftfeuchte, Basiswert zur Minute10 | % |
| RR | Niederschlag | 10 Minuten Summe des Niederschlags, Summe der ... | mm |
| SO | Sonnenscheindauer | Sonnenscheindauer, Sekundensumme über 10 Minuten | s |
| TB1 | Erdbodentemperatur in 10cm Tiefe | Erdbodentemperatur in 10cm Tiefe, Basiswert zu... | °C |
| TB2 | Erdbodentemperatur in 20cm Tiefe | Erdbodentemperatur in 20cm Tiefe, Basiswert zu... | °C |
| TB3 | Erdbodentemperatur in 50cm Tiefe | Erdbodentemperatur in 50cm Tiefe, Basiswert zu... | °C |
| TL | Lufttemperatur in 2m | Lufttemperatur in 2m Höhe, Basiswert zur Minute10 | °C |
| TP | Taupunkt | Taupunktstemperatur, Basiswert zur Minute10 | °C |
```python
df.columns
```
Index(['DD', 'FF', 'GSX', 'P', 'RF', 'RR', 'SO', 'TB1', 'TB2', 'TB3', 'TL', 'TP'], dtype='object')
02-02: again, explain in plain sentences what dfmeta.loc[df.columns] is doing, and why it works that way.
'df.columns' holds the names of the columns shown above, and .loc gives us the entire row for each index label we put in the brackets []. Here we pass df.columns, so we get the metadata row of every variable in df and can see the description of the whole DataFrame in one table.
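Label-based row selection with a list can be sketched on a tiny made-up metadata table (names and units below are mine):

```python
import pandas as pd

meta = pd.DataFrame({'Einheit': ['°', 'm/s', '°C']}, index=['DD', 'FF', 'TL'])
# passing a list (or Index) of labels to .loc returns those rows, in that order
sel = meta.loc[['TL', 'DD']]
print(list(sel.index))
```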
Finally, let me do a last step for you before you start coding:
```python
dfh = df.resample('H').mean()
```
02-03: explore the dfh dataframe. Explain, in plain words, what the purpose of .resample('H') followed by mean() is. Explain what .resample('H').max() and .resample('H').sum() would do.
.resample('H').mean(): before, we had 368208 rows × 12 columns because a measurement was taken every 10 minutes. Now we have the data every hour, with 61368 rows × 12 columns, and each value is the mean of the values that fall inside that hour. This reduces the number of rows to roughly 1/6.
.resample('H').max() takes the maximum of the values within each hour and outputs it in the DataFrame.

.resample('H').sum() takes all the values per hour, adds them up and outputs their sum.
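The three aggregations can be sketched on a small synthetic 10-min series (the numbers are made up):

```python
import numpy as np
import pandas as pd

idx = pd.date_range('2015-01-01 00:00', periods=12, freq='10min')
s = pd.Series(np.arange(12.0), index=idx)  # two hours of 10-min values

hourly_mean = s.resample('H').mean()  # average of the 6 values in each hour
hourly_max = s.resample('H').max()    # largest of the 6 values
hourly_sum = s.resample('H').sum()    # total of the 6 values
print(hourly_mean.tolist(), hourly_max.tolist(), hourly_sum.tolist())
```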
02-04: Using np.allclose, make sure that the average of the first hour (that you'll compute yourself from df) is indeed equal to the first row of dfh. Now, two variables in the dataframe have units that aren't suitable for averaging. Please convert the following variables to the correct units:
- RR needs to be converted from the average of 10 min sums to mm/h
- SO needs to be converted from the average of 10 min sums to s/h
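Since each hour contains six 10-min sums, the hourly total is simply six times the hourly mean of those sums; a sketch with made-up precipitation values:

```python
import numpy as np
import pandas as pd

idx = pd.date_range('2015-01-01 00:00', periods=6, freq='10min')
rr = pd.Series([0.0, 0.1, 0.3, 0.0, 0.2, 0.0], index=idx)  # 10-min sums (mm)

mm_per_h = rr.resample('H').sum()            # correct hourly total (mm/h)
also_mm_per_h = rr.resample('H').mean() * 6  # equivalently: mean of sums x 6
print(mm_per_h.iloc[0], also_mm_per_h.iloc[0])
```

The same reasoning applies to SO (s/h): multiply the mean of the 10-min second-sums by 6, or resample with sum() directly.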
So, I create a new dataframe called dfmeta_h which matches our dataframe dfh, and in which RR and SO are described with the correct units.
```python
dfmeta_h = dfmeta.replace({'Sonnenscheindauer, Sekundensumme über 10 Minuten': 'Sonnenscheindauer, Sekundensumme über 1 Stunde',
                           '10 Minuten Summe des Niederschlags, Summe der Basiswerte über 10 Minuten': '1 Stunde Summe des Niederschlags, Summe der Basiswerte über 1 Stunde'})
```python
# for example
dfmeta_h.loc['RR'][1]
```
'1 Stunde Summe des Niederschlags, Summe der Basiswerte über 1 Stunde'
In the next lines I check that the values in the data frames df and dfh are consistent:
```python
np.allclose(dfh['RR'][0:1], df['RR'][0:6])
```
True
```python
np.allclose(dfh['SO'][0:1], df['SO'][0:6])
```
True
```python
# I change your variable dfh: only RR and SO should be summed,
# the other variables stay hourly means
dfh = df.resample('H').mean()
dfh['RR'] = df['RR'].resample('H').sum()
dfh['SO'] = df['SO'].resample('H').sum()
```
From now on, we will use the hourly data only (and further aggregations when necessary). The 10 mins data are great but require a little bit more of pandas kung fu (the chinese term, not the sport) to be used efficiently.
Spend some time exploring the dfh dataframe we just created. What time period does it cover? What variables does it contain?
It goes from 2015-01-01 00:00:00 to 2021-12-31 23:00:00, i.e. from the first of January 2015 to the 31st of December 2021. It contains 12 columns (DD, FF, GSX, P, RF, RR, SO, TB1, TB2, TB3, TL, TP), plus the index column with the time when the data was recorded.
Note on pandas: all the exercises below can be done with or without pandas. Each question can be answered with very few lines of code (often one or two) with pandas, and I recommend to use it as much as possible. If you want, you can always use numpy in case of doubt: you can access the data as a numpy array with: df[column_name].values.
03 - Precipitation
In this section, we will focus on precipitation only.
03-01: Compute the average annual precipitation (m/year) over the 7-year period.
03-02: What is the smallest non-zero precipitation measured at the station? What is the maximum hourly precipitation measured at the station? When did this occur?
03-03: Plot a histogram of hourly precipitation, with bins of size 0.2 mm/h, starting at 0.1 mm/h and ending at 25 mm/h. Plot the same data, but this time with a logarithmic y-axis. Compute the 99th percentile (or quantile) of hourly precipitation.
03-04: Compute daily sums (mm/d) of precipitation (tip: use .resample again). Compute the average number of rain days per year in Innsbruck (a "rain day" is a day with at least 0.1 mm/d of measured precipitation).
03-05: Now select (subset) the daily dataframe to keep only daily data in the months of December, January, February (DJF). To do this, note that dfh.index.month exists and can be used to subset the data efficiently. Compute the average precipitation in DJF (mm/d), and the average number of rainy days in DJF. Repeat with the months of June, July, August (JJA).
03-06: Repeat the DJF and JJA subsetting, but this time with hourly data. Count the total number of times that hourly precipitation in DJF is above the 99th percentile computed in exercise 03-03. Repeat with JJA.
03-07: Compute and plot the average daily cycle of hourly precipitation in DJF and JJA. I expect a plot similar to this example. To compute the daily cycle, I recommend combining two very useful tools. First, start by noticing that ds.index.hour exists and can be used to categorize data. Then, note that df.groupby exists and can be used exactly for that (documentation).
03-01:
```python
dfhy = df.resample('Y').sum()
dfhy_RR = dfhy['RR']
print(round(dfhy_RR.mean()), "mm/year")
```
921 mm/year
03-02:
The smallest non-zero value is 0.1 mm. I tried different methods, but looking at the data, the minimum value could not be smaller than 0.1, and that is how I finally arrived at the answer.
```python
min_RR_value = dfh.loc[dfh['RR'] == 0.1]
min_RR_value
```
| time | DD | FF | GSX | P | RF | RR | SO | TB1 | TB2 | TB3 | TL | TP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
2015-01-02 16:00:00 | 1505.0 | 5.7 | 0.0 | 5772.3 | 562.0 | 0.1 | 0.0 | 18.6 | 21.6 | 30.6 | 6.4 | 0.4 |
2015-01-11 10:00:00 | 1617.0 | 77.8 | 623.0 | 5705.4 | 309.0 | 0.1 | 0.0 | 17.4 | 16.0 | 22.2 | 37.6 | -18.3 |
2015-01-14 16:00:00 | 989.0 | 10.8 | 0.0 | 5696.9 | 550.0 | 0.1 | 0.0 | 12.6 | 13.8 | 22.8 | 21.9 | 13.9 |
2015-01-18 04:00:00 | 703.0 | 7.0 | 0.0 | 5691.7 | 594.0 | 0.1 | 0.0 | 18.0 | 19.8 | 24.0 | 2.1 | 0.7 |
2015-01-24 04:00:00 | 556.0 | 3.5 | 0.0 | 5707.9 | 558.0 | 0.1 | 0.0 | 10.2 | 12.6 | 20.4 | 1.2 | -5.4 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
2021-12-13 05:00:00 | 1087.0 | 3.1 | 0.0 | 5761.7 | 577.0 | 0.1 | 0.0 | 15.6 | 22.8 | 27.6 | 2.3 | -1.0 |
2021-12-13 06:00:00 | 1624.0 | 5.4 | 0.0 | 5763.0 | 580.0 | 0.1 | 0.0 | 15.6 | 22.8 | 27.2 | 2.9 | -0.3 |
2021-12-13 07:00:00 | 1544.0 | 3.7 | 41.0 | 5766.1 | 582.0 | 0.1 | 0.0 | 15.6 | 22.8 | 27.0 | 3.2 | 0.4 |
2021-12-30 09:00:00 | 1594.0 | 3.9 | 532.0 | 5734.6 | 586.0 | 0.1 | 21.0 | 13.2 | 16.0 | 18.6 | 27.5 | 25.4 |
2021-12-30 11:00:00 | 1351.0 | 4.5 | 706.0 | 5731.4 | 568.0 | 0.1 | 489.0 | 13.8 | 16.2 | 18.6 | 37.8 | 33.2 |
1653 rows Γ 12 columns
The maximum hourly precipitation measured at the station is 22.2 mm, and it fell on 16 September 2021.
```python
dfh['RR'].max()
```
22.2
```python
max_RR_value = dfh.loc[dfh['RR'] == 22.2]
max_RR_value
```
| time | DD | FF | GSX | P | RF | RR | SO | TB1 | TB2 | TB3 | TL | TP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
2021-09-16 22:00:00 | 801.0 | 8.4 | 0.0 | 5692.2 | 579.0 | 22.2 | 0.0 | 115.4 | 114.6 | 107.4 | 95.0 | 91.4 |
03-03:
```python
ax = dfh.plot.hist(column=['RR'], bins=np.arange(0.1, 25, 0.2), color='lime')
ax.set_ylabel('frequency (hours)')
ax.set_xlabel('precipitation [mm/h]')
ax.set_title('Distribution of hourly precipitation (what happens most frequently)')
```
Text(0.5, 1.0, 'Distribution of hourly precipitation (what is more frecuent to happens)')
```python
ax = dfh.plot.hist(column=['RR'], bins=np.arange(0.1, 25, 0.2), color='aqua')
plt.yscale('log')
ax.set_ylabel('hours')
ax.set_xlabel('precipitation [mm/h]')
ax.set_title('hourly precipitation')

a = np.sort(dfh['RR'].values)
# index just past 99% of the sorted values; roughly np.quantile(dfh['RR'], 0.99)
cuant_099 = a[np.int64(np.round(0.99 * a.size))]
print('The 99th percentile of hourly precipitation is:', cuant_099)
```
The 99th percentile of hourly precipitation is: 2.4
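The sort-and-index approach above can be cross-checked against NumPy's built-in quantile function; a sketch on synthetic numbers (not the station data):

```python
import numpy as np

a = np.arange(101.0)        # 0, 1, ..., 100
q99 = np.quantile(a, 0.99)  # interpolated 99th percentile
print(q99)
```

On real data the two methods can differ slightly, since np.quantile interpolates between samples while indexing into the sorted array does not.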
03-04:
daily sums (mm/d) of precipitation
```python
dfd = df.resample('D').sum()
dfd_RR = dfd['RR']
```
```python
# count the rain days (days with at least 0.1 mm of precipitation)
count = (dfd_RR >= 0.1).sum()
print('average of rainy days per year:', round(count / 7))
```
average of rainy days per year: 170
03-05:
the average precipitation in DJF and JJA (mm/d), and the average number of rainy days in DJF and JJA (winter and summer)
```python
dfd_RR = dfd['RR']
dfd['rainyday'] = dfd['RR'] >= 0.1  # boolean: True on rain days
```
```python
dfd_DJF = dfd[dfd.index.month.isin([12, 1, 2])]
avg_prec_DJF = np.mean(dfd_DJF['RR'])
avg_rain_days_DJF = np.mean(dfd_DJF['rainyday'].resample('Y').sum())
dfd_JJA = dfd[dfd.index.month.isin([6, 7, 8])]
avg_prec_JJA = np.mean(dfd_JJA['RR'])
avg_rain_days_JJA = np.mean(dfd_JJA['rainyday'].resample('Y').sum())
print('average precipitation per day in WINTER (DJF):', avg_prec_DJF, 'mm/day')
print('average precipitation per day in SUMMER (JJA):', avg_prec_JJA, 'mm/day')
print('average number of rainy days in WINTER (DJF):', avg_rain_days_DJF, 'days')
print('average number of rainy days in SUMMER (JJA):', avg_rain_days_JJA, 'days')
```
average precipitation per day in WINTER (DJF): 1.7726265822784812 mm/day
average precipitation per day in SUMMER (JJA): 4.024844720496894 mm/day
average number of rainy days in WINTER (DJF): 37.57142857142857 days
average number of rainy days in SUMMER (JJA): 53.142857142857146 days
03-06: with hourly data
```python
dfh_DJF = dfh[dfh.index.month.isin([12, 1, 2])]
above99_DJF = np.sum(dfh_DJF['RR'] > cuant_099)
dfh_JJA = dfh[dfh.index.month.isin([6, 7, 8])]
above99_JJA = np.sum(dfh_JJA['RR'] > cuant_099)
print('total number of times that hourly precipitation in DJF is above the 99th percentile:', above99_DJF)
print('total number of times that hourly precipitation in JJA is above the 99th percentile:', above99_JJA)
```
total number of times that hourly precipitation in DJF is above the 99th percentile: 58
total number of times that hourly precipitation in JJA is above the 99th percentile: 298
03-07: the average daily cycle of hourly precipitation in DJF and JJA
```python
hours = np.arange(0, 24)
h_prec_av_DJF = []
h_prec_av_JJA = []
for i in hours:
    h_prec_av_DJF.append(np.mean(dfh_DJF['RR'][dfh_DJF.index.hour == i]))
h_prec_av_DJF = np.array(h_prec_av_DJF)
for i in hours:
    h_prec_av_JJA.append(np.mean(dfh_JJA['RR'][dfh_JJA.index.hour == i]))
h_prec_av_JJA = np.array(h_prec_av_JJA)

fig, ax = plt.subplots()
ax.plot(hours, h_prec_av_DJF, color='blueviolet', label='WINTER')
ax.plot(hours, h_prec_av_JJA, color='orangered', label='SUMMER')
ax.set_xlabel('Hour of day')
ax.set_ylabel('precipitation (mm/h)')
ax.legend()
ax.set_title('Average daily cycle of precipitation in Innsbruck, 2015-2021')
```
Text(0.5, 1.0, 'Daily cycle of precipitation in Innsbruck in average 2015-2021')
04 - A few other variables
In this section, we will continue to analyze the weather station data.
04-01: Verify that the three soil temperatures have approximately the same average value over the entire period. Now plot the three soil temperature timeseries at hourly resolution over the course of the year 2020 (example). Repeat the plot for the month of May 2020.
04-02: Plot the average daily cycle of all three soil temperatures.
04-03: Compute the difference (in Β°C) between the air temperature and the dewpoint temperature. Now plot this difference on a scatter plot (x-axis: relative humidity, y-axis: temperature difference).
```python
dfd_TB1 = dfd['TB1']
dfd_TB2 = dfd['TB2']
dfd_TB3 = dfd['TB3']
# note: dfd holds daily sums, so these are 144 x the mean 10-min temperature
print('verification that the average soil temperatures are similar over the entire period')
print('for TB1:', round(dfd_TB1.mean()), '/ for TB2:', round(dfd_TB2.mean()), '/ for TB3:', round(dfd_TB3.mean()))
```
verification of averages of soil temperatures values over the entire period for TB1 : 1623 /for TB2 : 1656 /for TB3 : 1638
```python
dfh_TB1 = dfh['TB1']
dfh_TB2 = dfh['TB2']
dfh_TB3 = dfh['TB3']
dfh_TB1_2020 = dfh_TB1.loc['2020']
dfh_TB2_2020 = dfh_TB2.loc['2020']
dfh_TB3_2020 = dfh_TB3.loc['2020']

fig, axs = plt.subplots()
dfh_TB1_2020.plot(ax=axs, label='TB1')
dfh_TB2_2020.plot(ax=axs, label='TB2')
dfh_TB3_2020.plot(ax=axs, label='TB3')
axs.set_ylabel('Temperature (°C)')
axs.set_title('Soil temperatures 2020')
```
Text(0.5, 1.0, 'Soil temperatures 2020')
```python
dfh_TB1_may = dfh['TB1'].loc['2020-05']
dfh_TB2_may = dfh['TB2'].loc['2020-05']
dfh_TB3_may = dfh['TB3'].loc['2020-05']

fig, axs = plt.subplots()
dfh_TB1_may.plot(ax=axs, label='TB1')
dfh_TB2_may.plot(ax=axs, label='TB2')
dfh_TB3_may.plot(ax=axs, label='TB3')
axs.set_ylabel('Temperature (°C)')
axs.set_title('Soil temperatures May 2020')
```
Text(0.5, 1.0, 'Soil temperatures May 2020')
04-02: average daily cycle of all three soil temperatures
```python
h_a_TB1 = []
h_a_TB2 = []
h_a_TB3 = []
for i in hours:
    h_a_TB1.append(np.mean(dfh['TB1'][dfh.index.hour == i]))
h_a_TB1 = np.array(h_a_TB1)
for i in hours:
    h_a_TB2.append(np.mean(dfh['TB2'][dfh.index.hour == i]))
h_a_TB2 = np.array(h_a_TB2)
for i in hours:
    h_a_TB3.append(np.mean(dfh['TB3'][dfh.index.hour == i]))
h_a_TB3 = np.array(h_a_TB3)

fig, ax = plt.subplots()
ax.plot(hours, h_a_TB1, label='TB1', color='lime')
ax.plot(hours, h_a_TB2, label='TB2', color='blueviolet')
ax.plot(hours, h_a_TB3, label='TB3', color='yellow')
ax.set_xlabel('hour of day')
ax.set_ylabel('soil temperature (°C)')
ax.set_title('Daily cycle of soil temperatures (2015-2021)')
ax.legend();
```
04-03: difference (in °C) between the air temperature and the dewpoint temperature
```python
def difference_of_air_t_dew_p(air, dew):
    return air - dew
```
```python
dif_temp = pd.Series(df['TL'] - df['TP'])
rel_hum = df['RF']
```
```python
f, ax = plt.subplots()
ax.scatter(rel_hum, dif_temp, s=0.01, color='purple')
ax.set_xlabel('Relative Humidity (%)')
ax.set_ylabel('difference of air temperature and dew point (°C)')
ax.set_title('Relation between relative humidity and temperature difference');
```
05 - Free coding project
```python
import imageio.v3 as iio
```
For the free coding project I chose to download information from two different sources. First I analyse data from the Jamtal Ferner glacier area, and at the end I compare this glacier's mass balance with that of two other glaciers from different regions of the world.
General information on 3 different glaciers in the world
Central Europe --- JAMTAL FERNER
```python
wgms_df = pd.read_csv('mass_balance.csv', index_col=3)
JAM = wgms_df[wgms_df['NAME'] == 'JAMTAL F.']
annual_JAM = JAM[JAM['LOWER_BOUND'] == 9999]
annual_JAM_winter_balance = annual_JAM.iloc[:, 6]
annual_JAM_balance = annual_JAM.iloc[:, 10]
annual_JAM_summer_balance = annual_JAM.iloc[:, 8]
```
FACTS:
- Political unit: AT
- WGMS ID: 480
- Latitude: 46.86°N, Longitude: 10.15°E
- Height min: 2428 m a.s.l., Height max: 3116 m a.s.l.
- Measurement types: MASS BALANCE, THICKNESS CHANGE & FRONT VARIATION
- Source: WGMS.ch
- The glacier covers an area of 2.52 km² (2020) (Wikipedia)
```python
img = iio.imread('Jamtalferner.jpg')
plt.imshow(img)
```
<matplotlib.image.AxesImage at 0x29a034011b0>
Central Asia --- TS.TUYUKSUYSKIY
```python
wgms_df = pd.read_csv('mass_balance.csv', index_col=3)
TS = wgms_df[wgms_df['NAME'] == 'TS.TUYUKSUYSKIY']
annual_TS = TS[TS['LOWER_BOUND'] == 9999]
annual_TS_winter_balance = annual_TS.iloc[:, 6]
annual_TS_balance = annual_TS.iloc[:, 10]
annual_TS_summer_balance = annual_TS.iloc[:, 8]
```
FACTS:
- Political unit: KZ
- WGMS ID: 817
- Latitude: 43.05°N, Longitude: 77.08°E
- Height min: 3478 m a.s.l., Height max: 4219 m a.s.l.
- Measurement types: MASS BALANCE, THICKNESS CHANGE & FRONT VARIATION
- Length of 2.6 km and a surface area of 2.3 km² (2013)
- Source: WGMS.ch
```python
img = iio.imread('tuyuksu.jpg')
plt.imshow(img)
```
<matplotlib.image.AxesImage at 0x1f2c44caa10>
Canadian High Arctic --- MEIGHEN ICE CAP
```python
wgms_df = pd.read_csv('mass_balance.csv', index_col=3)
MEI = wgms_df[wgms_df['NAME'] == 'MEIGHEN ICE CAP']
annual_MEI = MEI[MEI['LOWER_BOUND'] == 9999]
annual_MEI_winter_balance = annual_MEI.iloc[:, 6]
annual_MEI_balance = annual_MEI.iloc[:, 10]
annual_MEI_summer_balance = annual_MEI.iloc[:, 8]
```
FACTS:
- Political unit: CA
- WGMS ID: 16
- Latitude: 79.95°N, Longitude: 99.13°W
- Height min: 70 m a.s.l., Height max: 1267 m a.s.l.
- Measurement types: MASS BALANCE
- Source: WGMS.ch
- "At 62 km² in area (2016), the Meighen Ice Cap (MIC) occupies ~10% of Meighen Island, Nunavut, Canada" (source: "Meighen Ice Cap: changes in geometry, mass, and climatic response since 1959", David O. Burgess and Bradley D. Danielson, Canadian Journal of Earth Sciences, 15 March 2022, https://doi.org/10.1139/cjes-2021-0126)
1 2 | img = iio.imread('meighenIceCap20110716-1024x683.jpg') plt.imshow(img) |
<matplotlib.image.AxesImage at 0x1feb37a0940>
Analysis of JAMTAL FERNER
First I want to work with some data from Jamtal, and then I will compare the three glaciers mentioned above. I downloaded data for these three glaciers from the WGMS site. I also downloaded data from two weather stations close to Jamtal to cross-check some information.
The next plot compares the winter, summer, and annual mass balance of an alpine glacier, the Jamtal Ferner. As we can see in the plot, the glacier is losing ice mass, which is why it is getting smaller every year. The values can be read as mm of water per m². There is a hole in the plot because of missing data, but I kept the full period anyway in order to also see the oldest data, which was interesting for me.
```python
# Jamtal plot: summer - winter - annual
x = np.linspace(1989, 2021, 33)
plt.plot(x, annual_JAM.iloc[:, 8], label='summer balance', color='red')
plt.plot(x, annual_JAM.iloc[:, 6], label='winter balance', color='blue')
plt.plot(x, annual_JAM.iloc[:, 8] + annual_JAM.iloc[:, 6], label='annual balance', linestyle='--', color='violet')
plt.legend()
plt.xlabel("Years")
plt.ylabel("mm/m^2 of water")
plt.title("Summer - Winter - Annual Ice Mass Balance, Jamtal Glacier")
```
Text(0.5, 1.0, 'Summer - Winter - Annual Ice Mass Balance, Jamtal Glacier ')
The next lines read ZAMG data from two different weather stations. One is GALTUR at 1587 m, and the other is BRUNNENKOGEL at 3437 m. I chose these because they are the closest stations to Jamtal with easy access to the data, and I wanted to compare some information between the weather stations and the glacier data.
```python
GAL = pd.read_csv('GALTUR_STD_Datensatz_19890101T0000_20211231T2300.csv', index_col=0, parse_dates=True)
GAL_Y = GAL['TTX'].resample("Y").mean()
GAL_D = GAL['TTX'].resample("D").mean()
BRUN = pd.read_csv('BRUNNENKOGEL_STD_Datensatz_19890101T0000_20211231T2300.csv', index_col=0, parse_dates=True)
BRUN_Y = BRUN['TTX'].resample("Y").mean()
BRUN_D = BRUN['TTX'].resample("D").mean()
```
The next plots compare the two stations at different heights. The first plot shows the daily averages, and in the second the values are the annual average temperatures. In the blue BRUNNENKOGEL line of the second plot there is a very strong dip that reaches about -12 °C; I think this is a data problem, because the first plot shows a gap in the same place. The annual mean was still computed from it, and I did not have enough time to figure out what happened there.
```python
plt.plot(GAL_D, label='Galtur, 1587 m', color='springgreen')
plt.plot(BRUN_D, label='Brunnenkogel, 3437 m', color='royalblue')
plt.xlabel("Years")
plt.ylabel("Temperature (°C)")
plt.title("Comparison of daily temperature at 2 stations at different heights, around Jamtal")
plt.legend()
plt.show()
```
```python
plt.plot(GAL_Y, label='Galtur, 1587 m', color='springgreen')
plt.plot(BRUN_Y, label='Brunnenkogel, 3437 m', color='royalblue')
plt.xlabel("Years")
plt.ylabel("Temperature (°C)")
plt.title("Comparison of annual temperature at 2 stations at different heights, around Jamtal")
plt.legend()
plt.show()
```
The next plot shows a wind rose with data from the BRUNNENKOGEL station, close to Jamtal. I chose this station because it is very high, to show the wind direction and speed at a place similar to the top of the Jamtal glacier; on ZAMG I did not find information inside that valley, so this was the closest and most similar station I found. We can see that the predominant wind comes from the south-west.
```python
BRU_WIN = pd.read_csv('WIND_BRUN_STD_Datensatz_19890101T0000_20211231T2300.csv', index_col=0, parse_dates=True)
BRU_WIN_D = BRU_WIN.resample("H").mean()

from windrose import WindroseAxes
wind_dir = BRU_WIN_D['D6X']
wind_speed = BRU_WIN_D['VVX']
ax = WindroseAxes.from_ax()
ax.bar(wind_dir, wind_speed)
ax.set_legend()
ax.set_title('Hourly wind speed and wind direction at BRUNNENKOGEL, close to Jamtal')
plt.show()
```
In the next plot I wanted to cross the data from the two sources. I chose the winter ice mass balance of Jamtal and the winter precipitation of the weather stations, to see if they look similar, thinking of the precipitation at Galtur in winter versus the snow accumulation on the glacier in winter. Temperatures matter a lot for this analysis (snow or rain falling). I also tried to cross the Jamtal winter balance with the BRUNNENKOGEL winter precipitation, but that data did not work.
```python
GAL_winter = GAL[GAL.index.month.isin([12, 1, 2])]
x = np.linspace(1989, 2021, 33)
fig, axes = plt.subplots()
axes.bar(x, annual_JAM_winter_balance, color='b')
axes.set_xlabel('years')
axes.set_ylabel('JAM winter balance (mm/m^2 of water)', color='b')
twin_axes = axes.twinx()
twin_axes.plot(x, GAL_winter['RSX'].resample("Y").sum(), 'limegreen')
twin_axes.set_ylabel('total winter precipitation (snow and rain) at GALTUR', color='limegreen')
fig.suptitle("Winter Balance JAMTAL - Winter Precipitation GALTUR")
plt.show()
```
Next, the winter balance together with the winter temperatures at BRUNNENKOGEL. This makes a bit more sense: when the mean temperature is a bit higher, the mass balance is a bit lower. But again, other factors affect this dynamic.
```python
BRUN_winter = BRUN[BRUN.index.month.isin([12, 1, 2])]
x = np.linspace(1989, 2021, 33)
fig, axes = plt.subplots()
axes.bar(x, annual_JAM_winter_balance, color='b')
axes.set_xlabel('years')
axes.set_ylabel('JAM winter balance (mm/m^2 of water)', color='b')
twin_axes = axes.twinx()
twin_axes.plot(x, BRUN_winter['TTX'].resample("Y").mean(), 'limegreen')
twin_axes.set_ylabel('Temperature BRUNNENKOGEL (°C)', color='limegreen')
fig.suptitle("Winter Balance JAMTAL - Winter BRUNNENKOGEL Temperature (°C)")
plt.show()
```
In the next one I cross the Jamtal winter balance with the average annual temperature at Brunnenkogel, to see if colder years are related to more snow accumulation on the glacier. But again, other factors probably affect such a conclusion.
```python
x = np.linspace(1989, 2021, 33)
fig, axes = plt.subplots()
axes.bar(x, annual_JAM_winter_balance, color='b')
axes.set_xlabel('years')
axes.set_ylabel('JAM winter balance (mm/m^2 of water)', color='b')
twin_axes = axes.twinx()
twin_axes.plot(x, BRUN_Y, 'r')
twin_axes.set_ylabel('Temperature BRUNNENKOGEL (°C)', color='r')
fig.suptitle("Winter Balance JAMTAL - Average Annual Temperature")
plt.show()
```
In the next plot I use the annual balance instead of the winter one, and because these are negative numbers the blue columns point the other way around. Since 1989 there has been no positive mass balance at Jamtal.
```python
x = np.linspace(1989, 2021, 33)
fig, axes = plt.subplots()
axes.bar(x, annual_JAM_balance, color='b')
axes.set_xlabel('years')
axes.set_ylabel('JAM annual balance (mm/m^2 of water)', color='b')
twin_axes = axes.twinx()
twin_axes.plot(x, GAL_Y, 'deeppink')
twin_axes.set_ylabel('Temperature GALTUR (°C)', color='deeppink')
fig.suptitle("Annual Balance JAMTAL - Average Annual Temperature GALTUR")
plt.show()
```
In the next plot, the idea is to cross the data from the three chosen glaciers/ice cap to show that all of them lose more ice mass per year than they gain. The values are mostly below 0, which means that the net gain-and-loss ice mass balance tends towards negative values. We can also see, for example with the blue line, that a glacier at a more temperate latitude like Jamtal, with its particular characteristics (location, altitude, size), may be losing more relative ice mass than the Canadian ice cap in the Arctic (which is lower, bigger, and at a higher latitude). The glacier from Central Asia (green line, TS.TUYUKSUYSKIY) lies at a similar latitude to Jamtal (but is located higher) and loses less ice mass than the alpine one, and more than the Arctic one. Of course many other factors describe the dynamics of each glacier, but the idea was to compare different glaciers with different dynamics around the world, to see in one image how the mass balance of the three of them looks.
ANNUAL_BALANCE description: area-normalized ("specific") annual balance (mm m⁻² w.e.)
This description means that we are measuring mm of water equivalent per square meter over a certain period of time. When the glacier gains ice mass (e.g. accumulation of snowfall in wintertime) the number is positive, and when it loses ice mass (melting and sublimation in summer) the number is negative. The annual mass balance combines a winter balance and a summer balance to get the net mass balance over one year.
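The winter-plus-summer bookkeeping can be sanity-checked with toy numbers (the values below are invented for illustration, not WGMS data):

```python
winter_balance = 1200.0   # accumulation season (mm w.e.), made-up value
summer_balance = -2000.0  # ablation season (mm w.e.), made-up value
annual_balance = winter_balance + summer_balance
print(annual_balance)     # negative -> the glacier lost mass that year
```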
```python
# plot the annual balance of each glacier, as discussed above
plt.plot(annual_MEI_balance, label='MEIGHEN ICE CAP', color='violet')
plt.plot(annual_JAM_balance, label='JAMTAL', color='blue')
plt.plot(annual_TS_balance, label='TS.TUYUKSUYSKIY', color='green')
plt.xlabel("Years")
plt.ylabel("mm/m^2 of water")
plt.title("Comparison of Annual Mass Balance of 3 Glaciers at Different Locations")
plt.show()
```