Programming project¶
- Names: Anna Buchhauser, Carolin Zappini
- Matriculation numbers: 12215197, 12216030
01 - An energy balance model with hysteresis¶
The planetary albedo $\alpha$ is in fact changing with climate change. As the temperature drops, sea ice and ice sheets extend (increasing the albedo); inversely, the albedo decreases as temperature rises. The planetary albedo of our simple energy balance model follows this equation:
$$ \alpha = \begin{cases} 0.3,& \text{if } T \gt 280\\ 0.7,& \text{if } T \lt 250\\ a T + b, & \text{otherwise} \end{cases} $$
01-01: Compute the parameters $a$ and $b$ so that the equation is continuous at T=250K and T=280K.
```python
import numpy as np
```

```python
# create the matrix of the linear system
A = np.array([[250, 1], [280, 1]])
y = np.array([0.7, 0.3])

# create the inverse
A_inv = np.linalg.inv(A)

# check that the inverse worked
np.testing.assert_allclose(A @ A_inv, np.identity(2), atol=1e-7)
np.round(A @ A_inv, decimals=7)
```

```
array([[1., 0.],
       [0., 1.]])
```

```python
# solve the equation
params = A_inv @ y
params
```

```
array([-0.01333333,  4.03333333])
```
The resulting linear segment is: $$\alpha = -0.013\,T + 4.033$$ (We rounded here, but continue the calculations with the more exact values from the array!)
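As a quick cross-check, independent of the matrix inversion above, the two continuity conditions can also be solved by hand; `a` and `b` here are recomputed from scratch:

```python
import numpy as np

# Continuity conditions: a*250 + b = 0.7 and a*280 + b = 0.3
a = (0.3 - 0.7) / (280 - 250)   # slope between the two anchor points
b = 0.7 - a * 250               # intercept from the lower anchor

# the linear segment must meet both plateau values exactly
assert np.isclose(a * 250 + b, 0.7)
assert np.isclose(a * 280 + b, 0.3)
print(a, b)  # ≈ -0.01333, 4.0333
```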
01-02: Now write a function called `alpha_from_temperature` which accepts a single positional parameter `T` as input (a scalar) and returns the corresponding albedo. Test your function using doctests to make sure that it complies with the instructions above.
```python
def alpha_from_temperature(T):
    """Albedo depending on temperature.

    Parameters
    ----------
    T : float
        temperature (K)

    Returns
    -------
    albedo : float

    Examples
    --------
    >>> print(alpha_from_temperature(290))
    0.3
    >>> print(alpha_from_temperature(240))
    0.7
    >>> print(f'{alpha_from_temperature(260):.2f}')
    0.57
    >>> print(alpha_from_temperature(-1))
    Traceback (most recent call last):
        ...
    ValueError: Temperature must be in the valid range (>= 0K)
    """
    if T >= 0:  # checking if temperature is in the valid range
        if T < 250:
            alpha = 0.7
        elif T > 280:
            alpha = 0.3
        else:
            alpha = params[0] * T + params[1]
    else:
        raise ValueError("Temperature must be in the valid range (>= 0K)")
    return alpha
```

```python
# Testing
import doctest
doctest.testmod()
```

```
TestResults(failed=0, attempted=4)
```
01-03: Adapt the existing code from week 07 to write a function called `temperature_change_with_hysteresis` which accepts `t0` (the starting temperature in K) and `n_years` (the number of simulation years) as positional arguments and `tau` (the atmosphere transmissivity) as keyword argument (default value 0.611). Verify that:
- the stabilization temperature with `t0 = 292` and default `tau` is approximately 288 K
- the stabilization temperature with `t0 = 265` and default `tau` is approximately 233 K
At first we copied the functions from the week 7 assignment in order to calculate the absorbed shortwave radiation and the outgoing longwave radiation:
```python
def asr(alpha=0.3):
    s0 = 1362
    return (1 - alpha) * s0 / 4
```

```python
def olr(t, tau=0.611):
    sigma = 5.67E-8
    return sigma * tau * t**4
```
Then we copied the temperature change function and edited it so that it works with a temperature-dependent alpha:
```python
def temperature_change_with_hysteresis(t0, n_years, tau=0.611):
    """Temperature change scenario after change of transmissivity.

    Parameters
    ----------
    t0 : float
        the starting temperature (K)
    n_years : int
        the number of simulation years
    tau : float, optional
        the atmosphere transmissivity (-)

    Returns
    -------
    (time, temperature) : ndarrays of size n_years + 1

    Examples
    --------
    >>> y, t = temperature_change_with_hysteresis(292, 20)
    >>> y
    array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
           17, 18, 19, 20])
    >>> np.allclose(t[20], 288, atol=0.02)
    True
    >>> y, t = temperature_change_with_hysteresis(265, 100)
    >>> np.allclose(t[100], 233, atol=0.02, rtol=0.02)
    True
    """
    C = 4.0e+08  # heat capacity
    dt = 60 * 60 * 24 * 365  # seconds per year
    alpha = alpha_from_temperature(t0)
    years = np.arange(n_years + 1)
    temperature = np.zeros(n_years + 1)
    temperature[0] = t0
    for i in range(n_years):
        temperature[i + 1] = temperature[i] + dt / C * (asr(alpha=alpha) - olr(temperature[i], tau=tau))
        alpha = alpha_from_temperature(temperature[i + 1])
    return years, temperature

# Testing
doctest.testmod()
```
TestResults(failed=0, attempted=9)
01-04: Realize a total of N simulations with starting temperatures regularly spaced between `t0` = 206 K and `t0` = 318 K and plot them on a single plot for `n_years` = 50. The plot should look somewhat similar to this example for N = 21.
```python
import matplotlib.pyplot as plt
```
```python
fig, ax = plt.subplots()

# create array with all starting temperatures
t0 = np.linspace(206, 318, 21)

# plot one simulation per starting temperature
for v in t0:
    x, yi = temperature_change_with_hysteresis(v, 50)
    ax.plot(x, yi, color='grey')

# labels and titles
ax.set_ylabel('Temperature (K)')
ax.set_xlabel('Years')
ax.set_title('Climate change scenarios with hysteresis');
```
Bonus: only if you want (and if time permits), you can try to increase N and add colors to your plot to create a graph similar to this one.
```python
from matplotlib import colors
import pandas as pd
```

```python
# create array with all starting temperatures
t0 = np.linspace(206, 318, 200)

# create DataFrame with the different temperature evolutions
# (x, the year axis from the previous cell, is reused as index)
df = pd.DataFrame(index=x)
for v in t0:
    x, yi = temperature_change_with_hysteresis(v, 50)
    df[v] = yi

# plot dataframe
df.plot(legend=False, cmap='autumn')

# labels and titles
plt.ylabel('Temperature (K)')
plt.xlabel('Years')
plt.title('Climate change scenarios with hysteresis');
```

```
C:\Users\annab\AppData\Local\Temp\ipykernel_10156\3275947264.py:9: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
  df[v] = yi
```
02 - Weather station data files¶
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```

```python
# read data
df = pd.read_csv('INNSBRUCK-FLUGPLATZ_Datensatz_20150101_20211231.csv', index_col=1, parse_dates=True)
df = df.drop('station', axis=1)
```
02-01: after reading the documentation of the respective functions (and maybe try a few things yourself), explain in plain sentences:
- what am I asking pandas to do with the `index_col=1, parse_dates=True` keyword arguments? Why am I doing this?
- what am I asking pandas to do with `.drop()`? Why `axis=1`?
- `index_col`: with this parameter we tell pandas which column to use as the index of our DataFrame; if we set `index_col` to `False`, pandas won't use any column as the index.
- `parse_dates`: if this is set to `True`, pandas tries to turn the relevant values into real datetime types, in this case the index column. If it is set to `False`, the index column is simply of dtype `object`.
- `drop`: with the drop method we can remove a column or a row, depending on the value we choose for the keyword argument `axis`. The first positional argument states the label of the column/row we want to remove; with `axis=0` we remove a row, with `axis=1` a column. In this case the column named 'station' has been removed.
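A minimal illustration of these two arguments on a made-up two-row CSV (the values and the `io.StringIO` wrapper are just for demonstration):

```python
import io
import pandas as pd

# a tiny invented CSV in the same shape as the station file
csv = io.StringIO(
    "station,time,TL\n"
    "IBK,2020-01-01 00:00,1.5\n"
    "IBK,2020-01-01 00:10,1.7\n"
)

toy = pd.read_csv(csv, index_col=1, parse_dates=True)
print(toy.index.dtype)  # datetime64[ns]: parse_dates turned the index into real timestamps

toy = toy.drop('station', axis=1)  # axis=1 drops the column labelled 'station'
print(toy.columns.tolist())  # ['TL']
```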
02-02: again, explain in plain sentences what `dfmeta.loc[df.columns]` is doing, and why it works that way.

```python
# read data
dfmeta = pd.read_csv('ZEHNMIN Parameter-Metadaten.csv', index_col=0)
dfmeta.loc[df.columns]
```
| | Kurzbeschreibung | Beschreibung | Einheit |
|---|---|---|---|
| DD | Windrichtung | Windrichtung, vektorielles Mittel über 10 Minuten | ° |
| FF | vektorielle Windgeschwindigkeit | Windgeschwindigkeit, vektorielles Mittel über ... | m/s |
| GSX | Globalstrahlung | Globalstrahlung, arithmetisches Mittel über 10... | W/m² |
| P | Luftdruck | Luftdruck, Basiswert zur Minute10 | hPa |
| RF | Relative Feuchte | Relative Luftfeuchte, Basiswert zur Minute10 | % |
| RR | Niederschlag | 10 Minuten Summe des Niederschlags, Summe der ... | mm |
| SO | Sonnenscheindauer | Sonnenscheindauer, Sekundensumme über 10 Minuten | s |
| TB1 | Erdbodentemperatur in 10cm Tiefe | Erdbodentemperatur in 10cm Tiefe, Basiswert zu... | °C |
| TB2 | Erdbodentemperatur in 20cm Tiefe | Erdbodentemperatur in 20cm Tiefe, Basiswert zu... | °C |
| TB3 | Erdbodentemperatur in 50cm Tiefe | Erdbodentemperatur in 50cm Tiefe, Basiswert zu... | °C |
| TL | Lufttemperatur in 2m | Lufttemperatur in 2m Höhe, Basiswert zur Minute10 | °C |
| TP | Taupunkt | Taupunktstemperatur, Basiswert zur Minute10 | °C |
- `dfmeta.loc[df.columns]` searches for the rows in `dfmeta` whose labels appear as columns in `df`. This way it filters `dfmeta` for the variables we need; otherwise the table would be much longer. With `df.columns` we get the label of each column in the `df` DataFrame, which corresponds to a row label of the `dfmeta` DataFrame. The `dfmeta.loc` indexer then keeps only the rows with these labels.
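A small sketch of this label-based selection on toy stand-ins for `dfmeta` and `df` (the frames and values below are invented for illustration):

```python
import pandas as pd

# toy metadata: rows are variable names, like in dfmeta
meta = pd.DataFrame({'Einheit': ['°C', 'hPa', 'mm', 'm/s']},
                    index=['TL', 'P', 'RR', 'FF'])
# toy data frame: columns are variable names, like in df
data = pd.DataFrame({'RR': [0.0], 'TL': [1.5]})

# .loc with a list-like of labels selects exactly those rows, in that order
subset = meta.loc[data.columns]
print(subset.index.tolist())  # ['RR', 'TL']
```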
02-03: Explore the `dfh` dataframe. Explain, in plain words, what the purpose of `.resample('H')` followed by `.mean()` is. Explain what `.resample('H').max()` and `.resample('H').sum()` would do.

```python
dfh = df.resample('H').mean()
```
- The `.resample()` method can be used to group our data by another time frame; in this specific case the data of every hour (`'H'`) is grouped together and the `.mean()` method then calculates the mean value of each group. Following this logic, `.resample('H').max()` would give the highest measurement taken within each hour for each variable, and `.resample('H').sum()` would sum up the measurements of each hour for each column.
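A toy example of this behaviour, using two hours of made-up 10-minute data:

```python
import numpy as np
import pandas as pd

# two hours of fake 10-minute values: 0, 1, ..., 11
idx = pd.date_range('2020-01-01', periods=12, freq='10min')
s = pd.Series(np.arange(12.0), index=idx)

print(s.resample('H').mean().tolist())  # [2.5, 8.5]   -> mean of each hour
print(s.resample('H').max().tolist())   # [5.0, 11.0]  -> maximum within each hour
print(s.resample('H').sum().tolist())   # [15.0, 51.0] -> sum over each hour
```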
02-04: Using `np.allclose`, make sure that the average of the first hour (that you'll compute yourself from `df`) is indeed equal to the first row of `dfh`. Now, two variables in the dataframe have units that aren't suitable for averaging. Please convert the following variables to the correct units:
- `RR` needs to be converted from the average of 10 min sums to mm/h
- `SO` needs to be converted from the average of 10 min sums to s/h
```python
# checking if the average is correct
av = dict()
for v in df.columns:
    mean_val = df[v][:6].mean()  # the first hour = the first six 10-min values
    av[v] = np.allclose(dfh[v][0], mean_val)
av
```

```
{'DD': True, 'FF': True, 'GSX': True, 'P': True, 'RF': True, 'RR': True,
 'SO': True, 'TB1': True, 'TB2': True, 'TB3': True, 'TL': True, 'TP': True}
```

```python
# converting RR
dfh["RR"] = df["RR"].resample('H').sum()

# converting SO
dfh["SO"] = df["SO"].resample('H').sum()
```
Spend some time exploring the `dfh` dataframe we just created. What time period does it cover? What variables does it contain?

```python
# exploring the dataframe

# time period
print(f'Time period: {len(dfh.index)} hours \n')

# datatypes
print('Datatypes:')
dfh.dtypes
```

```
Time period: 61368 hours 

Datatypes:
DD     float64
FF     float64
GSX    float64
P      float64
RF     float64
RR     float64
SO     float64
TB1    float64
TB2    float64
TB3    float64
TL     float64
TP     float64
dtype: object
```
03 - Precipitation¶
03-01: Compute the average annual precipitation (m/year) over the 7-year period.
```python
# yearly sums averaged over the period, divided by 1000 to convert mm to m
dfh["RR"].resample('y').sum().mean() / 1000
```

```
0.9205571428571429
```
03-02: What is the smallest non-zero precipitation measured at the station? What is the maximum hourly precipitation measured at the station? When did this occur?
```python
print(f'The smallest non-zero precipitation measured was {dfh[dfh["RR"] > 0]["RR"].min()} mm/h')
print(f'The maximum hourly precipitation was {dfh["RR"].max():.2f} mm/h and it happened on {dfh[dfh["RR"] == dfh["RR"].max()].index[0]}')
```

```
The smallest non-zero precipitation measured was 0.1 mm/h
The maximum hourly precipitation was 22.20 mm/h and it happened on 2021-09-16 22:00:00
```
03-03: Plot a histogram of hourly precipitation, with bins of size 0.2 mm/h, starting at 0.1 mm/h and ending at 25 mm/h. Plot the same data, but this time with a logarithmic y-axis. Compute the 99th percentile (or quantile) of hourly precipitation.
```python
# histogram with linear y-axis
ax = dfh["RR"].plot.hist(by=None, bins=120, xlim=(0.1, 25), ylim=(0.1, 2000))
ax.set_xlabel('Hourly precipitation in mm/h')
ax.set_ylabel('Frequency')
ax.set_title('Histogram of hourly precipitation');
```

```python
# histogram with logarithmic y-axis
ax = dfh["RR"].plot.hist(by=None, bins=125, xlim=(0.1, 25), ylim=(0.1, 2000), logy=True)
ax.set_xlabel('Hourly precipitation in mm/h')
ax.set_ylabel('Frequency')
ax.set_title('Histogram of hourly precipitation');
```
```python
print(f'The 99th percentile of the hourly precipitation is {dfh.RR.quantile(0.99):.2f} mm/h.')
```

```
The 99th percentile of the hourly precipitation is 2.33 mm/h.
```
03-04: Compute daily sums (mm/d) of precipitation (tip: use `.resample` again). Compute the average number of rain days per year in Innsbruck (a "rain day" is a day with at least 0.1 mm/d of measured precipitation).
```python
daily_rr = dfh["RR"].resample('d').sum()
rain_days = daily_rr[daily_rr > 0]
# divided by the number of years
print(f'The average number of rainy days per year in Innsbruck is: {len(rain_days)/6:.0f}')
```
The average number of rainy days per year in Innsbruck is: 199
03-05: Now select (subset) the daily dataframe to keep only daily data in the months of December, January, February (DJF). To do this, note that `dfh.index.month` exists and can be used to subset the data efficiently. Compute the average precipitation in DJF (mm/d), and the average number of rainy days in DJF. Repeat with the months of June, July, August (JJA).
```python
# calculating DJF
djf = daily_rr[(daily_rr.index.month < 3) | (daily_rr.index.month > 11)]
rain_days_djf = djf[djf > 0]
print(f'The average precipitation per day in the months of December, January and February is {djf.mean():.2f} mm/d '
      f'and the average number of rainy days is {len(rain_days_djf)/6:.0f}.')
```
The average precipitation per day in the months of December, January and February is 1.77 mm/d and the average number of rainy days is 44.
```python
# calculating JJA
jja = daily_rr[(daily_rr.index.month < 9) & (daily_rr.index.month > 5)]
rain_days_jja = jja[jja > 0]
print(f'The average precipitation per day in the months of June, July and August is {jja.mean():.2f} mm/d '
      f'and the average number of rainy days is {len(rain_days_jja)/6:.0f}.')
```
The average precipitation per day in the months of June, July and August is 4.02 mm/d and the average number of rainy days is 62.
03-06: Repeat the DJF and JJA subsetting, but this time with hourly data. Count the total number of times that hourly precipitation in DJF is above the 99th percentile computed in exercise 03-03. Repeat with JJA.
```python
# calculating DJF
djf_hours = dfh['RR'][(dfh['RR'].index.month < 3) | (dfh['RR'].index.month > 11)]
times_djf = djf_hours[djf_hours > dfh.RR.quantile(0.99)].count()
print(f'The hourly precipitation in the months of December, January and February has been {times_djf} times above the 99th percentile.')
```

```
The hourly precipitation in the months of December, January and February has been 68 times above the 99th percentile.
```
```python
# calculating JJA
jja_hours = dfh['RR'][(dfh['RR'].index.month > 5) & (dfh['RR'].index.month < 9)]
times_jja = jja_hours[jja_hours > dfh.RR.quantile(0.99)].count()
print(f'The hourly precipitation in the months of June, July and August has been {times_jja} times above the 99th percentile.')
```

```
The hourly precipitation in the months of June, July and August has been 308 times above the 99th percentile.
```
03-07: Compute and plot the average daily cycle of hourly precipitation in DJF and JJA. I expect a plot similar to this example.
```python
# compute the daily average per hour of day
daily_cycle_djf = djf_hours.groupby(djf_hours.index.hour).mean()
daily_cycle_jja = jja_hours.groupby(jja_hours.index.hour).mean()

# plot data
fig, ax = plt.subplots()
x = np.arange(0, 24)
y1 = daily_cycle_djf
y2 = daily_cycle_jja
ax.plot(x, y1, color='yellowgreen', label='DJF')
ax.plot(x, y2, color='violet', label='JJA')
ax.set_xlabel('Hour of day (UTC)')
ax.set_ylabel('Hourly precipitation (mm/h)')
ax.set_title('Precipitation daily cycle Innsbruck 2015-2021');
plt.legend();
```
04 - A few other variables¶
04-01: Verify that the three soil temperatures have approximately the same average value over the entire period. Now plot the three soil temperature timeseries in hourly resolution over the course of the year 2020 (example). Repeat the plot with the month of May 2020.
```python
# verifying that the three temperatures have approximately the same average;
# np.allclose compares two arrays, so we compare all three means against the first one
means = [dfh['TB1'].mean(), dfh['TB2'].mean(), dfh['TB3'].mean()]
np.allclose(means, means[0], atol=0.2)
```
True
```python
# plot in hourly resolution over the course of the year 2020

# create new dataframe for 2020
dfh2020 = dfh.filter(like='2020', axis=0)

# plot data
dfh2020['TB1'].plot(label='TB1')
dfh2020['TB2'].plot(color='crimson', label='TB2')
dfh2020['TB3'].plot(label='TB3')

plt.xlabel('time')
plt.ylabel('soil temperature in °C')
plt.title('comparison soil temperature in 2020');
plt.legend();
```
```python
# plot in hourly resolution over the course of May 2020

# create new dataframe for May 2020
dfh2020_may = dfh2020.filter(like='2020-05', axis=0)

# plot data
dfh2020_may['TB1'].plot(label='TB1')
dfh2020_may['TB2'].plot(color='crimson', label='TB2')
dfh2020_may['TB3'].plot(label='TB3')

plt.xlabel('time')
plt.ylabel('soil temperature in °C')
plt.title('comparison soil temperature in May 2020');
plt.legend();
```
04-02: Plot the average daily cycle of all three soil temperatures.
```python
dfd = dfh.groupby(dfh.index.hour).mean()

# plot data
dfd['TB1'].plot(label='TB1')
dfd['TB2'].plot(color='crimson', label='TB2')
dfd['TB3'].plot(label='TB3')

plt.xlabel('hour of day')
plt.ylabel('soil temperature in °C')
plt.title('average daily cycle of the three soil temperatures');
plt.legend();
```
04-03: Compute the difference (in °C) between the air temperature and the dewpoint temperature. Now plot this difference on a scatter plot (x-axis: relative humidity, y-axis: temperature difference).
```python
# create new column with the temperature difference
dfh['Temp_diff'] = dfh['TL'] - dfh['TP']

dfh.plot.scatter(x='RF', y='Temp_diff', figsize=(10, 5))
plt.xlabel('relative humidity in %')
plt.ylabel('temperature difference in °C')
plt.title('difference between air temperature and dewpoint temperature');
05 - Free coding project¶
Thermodynamic analysis over Innsbruck¶
For the free coding project we originally thought to work with the snow data of some high-altitude stations around Innsbruck, but that did not work well due to missing information, so we decided to do some thermodynamic analyses with data from the station of the University of Innsbruck over the last five years. We got the measurements from the ZAMG database; more precisely, we downloaded the data from the "Messstationen Zehnminutendaten" section. We focused on: temperature, dew point temperature, pressure, wind velocity, wind direction and sunshine duration.
Firstly we downloaded the data and converted it into a data frame as already done above.
```python
data = pd.read_csv('ZEHNMIN Datensatz UNI_20180101T0000_20230605T2350.csv', index_col=0, parse_dates=True)
datah = data.resample('d').mean()
datah['SO'] = data['SO'].resample('d').sum() / 3600  # divide by 3600 to express the daily sunshine duration in hours
```
First plot: Relationship between temperature and dew point temperature¶
```python
fig, ax = plt.subplots()

x = datah.index
y1 = datah['TP']
y2 = datah['TL']

ax.plot(x, y1, label='dew point temperature')
ax.plot(x, y2, color='C1', label='air temperature')
ax.legend(loc='lower right')

# labels and titles
ax.set_xlabel('Time')
ax.set_ylabel('Temperature (°C)')
ax.set_title('Comparison between temperature and dew point temperature');
```
We can clearly see that there is only a small deviation between the actual temperature and the dew point temperature: if the temperature increases, the dew point increases as well. As we would expect, the air temperature is always higher than the dew point temperature.
Second plot: Relationship between temperature and sunshine duration¶
```python
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 8), sharex=True)

x = datah.index
y1 = datah['TL']
y2 = datah['SO']

ax1.plot(x, y1, color='coral', label='air temperature')
ax2.bar(x, y2, color='gold', label='sunshine')

# labels and titles
ax2.set_xlabel('Time')
ax1.set_ylabel('Air temperature (°C)')
ax2.set_ylabel('Sunshine duration (hours)')
ax1.set_title('Relationship between temperature and sunshine duration');
```
From these graphs we can see that there is a correlation between temperature and sunshine duration: the days last longer in summer, when the temperature is higher; as we would naturally expect, almost the same pattern repeats each year.
Third plot: Correlation between saturation vapor pressure and temperature¶
Now we want to calculate the saturation vapor pressure for a given value of temperature. We create a new column of the data frame using the following (Magnus-type) formula, with $T$ in °C and $e_s$ in hPa: $$e_s = 6.112 \exp\left(\frac{17.67\,T}{T+243.5}\right)$$
```python
datah['ES'] = 6.112 * np.exp(17.67 * datah['TL'] / (datah['TL'] + 243.5))
```

```python
datah.plot.scatter(x='TL', y='ES', color='orchid', figsize=(10, 5))
plt.xlabel('Air temperature (°C)')
plt.ylabel('Saturation vapor pressure (hPa)')
plt.title('Saturation vapor pressure as a function of temperature');
```
From this plot it is easy to see the exponential relationship between saturation vapor pressure and temperature.
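As a small sanity check of the formula (wrapped here in a hypothetical helper function `sat_vapor_pressure`, not part of the analysis above):

```python
import numpy as np

def sat_vapor_pressure(t_celsius):
    # Magnus-type formula from above: T in degC, result in hPa
    return 6.112 * np.exp(17.67 * t_celsius / (t_celsius + 243.5))

print(sat_vapor_pressure(0.0))  # 6.112 (the exponent vanishes at 0 degC)
# the saturation vapor pressure roughly doubles for every ~10 degC of warming
print(sat_vapor_pressure(20.0) / sat_vapor_pressure(10.0))  # ≈ 1.9
```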
Fourth plot: Air density with height¶
For this section we want to see how density changed at different heights over the last year, so we will take into consideration the University of Innsbruck station (578 m) and the Patscherkofel station (2247 m). We downloaded the necessary 2022 data of the Patscherkofel station from the same webpage as above. Then we created a new column for each dataframe containing the air density.
From the ideal gas law, we know that: $$p = \rho R T$$ In order to simplify our calculations, we will use the specific gas constant for dry air, although we have seen above that there is also some moisture in the air.
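Before applying the formula to the station data, it can be sanity-checked with textbook standard-atmosphere values (the constants below are not taken from the station):

```python
# quick plausibility check of rho = p / (R * T)
p = 101325.0   # standard sea-level pressure (Pa)
T = 288.15     # 15 degC in K
R = 287.05     # specific gas constant for dry air (J kg-1 K-1)

rho = p / (R * T)
print(round(rho, 3))  # 1.225, the standard sea-level air density in kg/m3
```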
```python
# create new column for the University of Innsbruck station
# (pressure converted from hPa to Pa, temperature from °C to K)
datah['RHO'] = datah['P'] * 100 / (287.05 * (datah['TL'] + 273.15))
datah2022 = datah.filter(like='2022', axis=0)
```
```python
# read data from station Patscherkofel
data_patsch = pd.read_csv('ZEHNMIN Datensatz pascherkofl_20220101T0000_20221231T2350.csv', index_col=0, parse_dates=True)
data_patschd = data_patsch.resample('d').mean()
```

```python
# create new column for Patscherkofel
data_patschd['RHO'] = data_patschd['P'] * 100 / (287.05 * (data_patschd['TL'] + 273.15))
```
```python
# plot both densities
fig, ax = plt.subplots()

x = datah2022.index
y1 = datah2022['RHO']
y2 = data_patschd['RHO']

ax.plot(x, y1, color='coral', label='air density Innsbruck')
ax.plot(x, y2, color='green', label='air density Patscherkofel')

# labels and titles
ax.legend(loc='upper center')
ax.set_xlabel('Time')
ax.set_ylabel('air density (kg/m$^{3}$)')
ax.set_title('comparison air density');
```
As expected, there is a clear difference in density between the two stations, which is due to the pressure profile of the atmosphere: pressure decreases with height, and with it the air density.
Fifth plot: Windrose of Innsbruck¶
We now want to plot a windrose of the wind profile in Innsbruck to display wind speed and direction over the last five years.
```python
from windrose import WindroseAxes
```

```python
ax = WindroseAxes.from_ax()
ax.bar(datah['DD'], datah['FF'], normed=True, opening=0.8, edgecolor='white')
ax.set_legend();
```
As we experience every day, we can see from the windrose that the strongest winds come from the north (Nordkette) and blow southwards.