Monday, December 27, 2010

This Friday's trough and January 7th, 2008

After a few days off to celebrate the holidays, it's once again time for me to look at the weather.

Of course, in my days off, a powerful nor'easter swamped the east coast with rain and heavy snow.  Blizzard conditions were experienced in New York City.  It was quite the event.

But I kind of missed that.  So, instead I'm turning to the potent trough that models are indicating should move through the country late this week.

What trough is this?  Here's the 96 hour GFS 500 mb height forecast for Friday morning--
Fig 1 -- 96 hour GFS forecast of 500 mb geopotential heights and winds for 12Z, Friday, Dec. 31, 2010. From the HOOT website.
Now that's a very deep trough for this time of year--the minimum closed contour is at 5280 meters and is all the way down over Kansas.  Based on this cyclonically-curved jet streak pattern, one might imagine a deepening surface low pressure center somewhere near Kansas City, Missouri, with a trailing cold front back through central Kansas and western Oklahoma (underneath the jet streak).  And, based on the GFS surface forecast for that time, you'd be pretty accurate with that imagination--
Fig 2 -- 96 hour GFS forecast of surface temperature (shaded), sea-level pressure (contoured) and winds (barbs) for  12Z, Friday, Dec. 31, 2010. From the HOOT website.
 We can also see a warm front in the above image extending northeastward from the surface low pressure center through eastern Iowa and into southern Wisconsin.  Superficially, this reminds me of the surface synoptic-scale setup from the first week of January in 2008:
Fig 3 -- HPC surface analysis from 15Z, January 7, 2008.  From the SPC Severe Thunderstorm Event Archive.
At this time, there was also a surface low near Kansas City with a warm front extending into southern Wisconsin and a cold front trailing into western Oklahoma.  With this particular event, several tornadoes were reported later that day in the lower Mississippi valley in addition to strong EF3 tornadoes that moved across northern Illinois and southern Wisconsin--on January 7th.  Not what you'd expect from the weather at this time of year.

However, there are some noticeable differences between these two events (or rather, between this theoretical GFS forecast and the event two years ago).  First, the surface temperatures are rather different.  In the GFS forecast, northern Illinois is only in the upper 30s in the morning, whereas in the map from January 7, 2008, the dewpoints (and consequently low temperatures) were in the 50s--so not only is it much cooler, there also doesn't seem to be as much moisture.  In fact, looking at the GFS dewpoint forecast for the same time:
Fig 4 -- 96 hour GFS forecast for dewpoint temperature (shaded) and winds at 12Z, Friday, Dec. 31, 2010.  From the HOOT website.
We see that dewpoints are only in the 40s (which is still high for this time of year, but not as high as it was in 2008) in much of Missouri, Oklahoma and Arkansas with the really deep moisture confined to the far south and gulf coast.

However, in looking at the 500mb chart from 12Z, January 7th, 2008, we see more differences:
Fig 5 -- Objective analysis of 500mb heights, winds, and temperatures from 12Z, January 7th, 2008. From the SPC Severe Thunderstorm Event Archive.
It's immediately obvious that the 500mb pattern looks rather different.  There is no cutoff contour of low heights like we are seeing in the GFS forecast.  Winds are more out of the southwest over the warm frontal zone in Iowa and southern Wisconsin as opposed to the more southerly winds being forecast in the same area by the GFS for the end of this week.  This may seem like a small difference, but changes in directional wind shear can have a huge impact on the ability of any storms that form to rotate.

We can also look at the upper air temperatures to get a hint at possible instability.  In figure 5 above, on January 7th, 2008, the 500mb temperatures over the surface baroclinic zone (over the fronts) ranged from -10 to -15 degrees Celsius in the lower Mississippi valley to around -20 degrees Celsius over northern Illinois and southern Wisconsin.  Looking at the GFS 500mb temperatures forecast for this Friday shows surprisingly similar numbers:
Fig 6 -- 96 hour GFS forecast of 500mb temperatures (shaded), geopotential height (contoured) and winds for 12Z, Friday, Dec. 31, 2010.  From the HOOT website.
Those temperatures line up rather well.  This would theoretically say good things about the potential instability of the atmosphere, as colder temperatures aloft tend to imply steeper lapse rates.  However, remember that surface temperatures in this forecast are around 15 degrees cooler or more, with significantly less moisture, than they were on January 7th, 2008.  That is not a good environment for getting surface-based convection.  Forecast soundings agree with this, still showing a deep isothermal layer (which is a stable layer) near the surface even after all the heating that will go on during the day on Friday.
Fig 7 -- BUFKIT GFS 108-hour forecast sounding for KRFD for 00Z, Jan. 1, 2011.
We can see that we're saturated well up to 700 mb, which does imply a good rain event.  Wind directional shear actually looks all right near the surface.  But that isothermal layer near the surface, along with lapse rates that in general really aren't that impressive, would seem to inhibit any good, deep convection.  But we'll see...

Anyhow, just thought I'd do a quick run down comparing this long-term forecast to the January 7th event after noting the striking similarities in the surface patterns.  However, as we can see here, just because the conditions look similar when viewed one way doesn't mean that they are similar in other ways.  We'll still see unusually warm temperatures throughout much of the middle of the country and probably some thunderstorms somewhere this Friday.  But at this point it doesn't look like quite the severe weather event we had two years ago.

Things could change in the models, though.  And they most definitely will.  96-hour forecasts are still way, way, way out there and there's no doubt that this GFS forecast will change in the days to come.  Already there's disagreement with the ECMWF model.  Here's the ECMWF 500mb chart for the same 96-hour forecast:
Fig 8 -- 96 hour ECMWF 500mb forecast winds (shaded) and geopotential heights (contoured) for 12Z, Dec. 31, 2010.  From the HOOT website.
The ECMWF model is slightly slower than the GFS with this trough--note that the main jet streak is over central and western Kansas instead of eastern Kansas.  It also brings the jet streak further north--now we'd expect to find a surface low somewhere near Sioux Falls, South Dakota, with a trailing cold front over western Kansas and the panhandles of Oklahoma and Texas.  The trough itself is also deeper--note the minimum closed contour is still 5280 meters (like it was in figure 1 above) but this closed contour extends over a much, much larger area.  So there are definite and distinct differences.

We'll just have to wait and see how this one all comes together.

Thursday, December 23, 2010

Finally...how do we calculate snow ratios?

A while ago I said that I would look into how we calculate snow ratios, as these have big impacts on determining the amount of snowfall to expect.  Today I'm finally getting around to looking at that particular subject....

First, what is a snow ratio?  Technically, it's the ratio of inches of snow on the ground to the liquid water content that the snow contains.  For example, if you had ten inches of snow on the ground, then all of it melted and you were left with one inch of water, that would be a 10:1 snow ratio.  Once the snow falls, it's easy to measure snow ratio (once you correct for the surface area from which you gathered the snow).  However, it's difficult to develop a purely physical basis for calculating snow ratios.  Most of our current knowledge on the subject comes from empirical test results.  The basic pattern of thinking behind snow ratios is this:

  1. How deep the snow on the ground becomes depends on the shape and size of the ice crystals in it.  Big, fluffy snowflake crystals (called "dendrites") tend to stack up deeper than small, compressed ice needles or plates.  It's simply a consequence of their geometry.
  2. It has long been established that different types of snow crystals form at different temperatures.  Therefore it should be possible to relate snow crystal type to the temperature at which it forms and consequently relate the snow ratio to the temperature at which the snow crystals are forming.
With this in mind, lots of empirical studies were done and the result was a chart that looks like this:
Fig 1 -- Empirical graph relating snow ratios to the temperature of the snow crystal source region.  Graph from a PowerPoint presentation by Daniel Cobb.
This handy little graph forms the basis for almost all of our snowfall forecasting techniques.  A few things to note from this graph:
  1. If we take the average of all those snow ratios, it works out to be around 10:1.  This is why we typically assume a 10:1 snow ratio when we want to make quick calculations--it splits the difference and shouldn't be that far off...usually.
  2. Note that the peak of the graph is between -12 degrees Celsius and -18 degrees Celsius.  Not coincidentally, this is the region of the most rapid dendrite snow crystal growth (also known as the "dendritic growth zone").  Remember that big, fluffy dendrites tend to make deeper snow for a given water content.  Therefore it's no coincidence that the temperatures that coincide with the highest snow ratios are also the temperatures where dendrite growth is the strongest.
Just for a refresher, here's a diagram I randomly found on the web (actually, according to the website I took it from, adapted from a 1954 book by Ukichiro Nakaya called Snow Crystals, Natural and Artificial) that relates the crystal type to the temperature of formation.
Fig 2 -- Diagram relating the snow crystal type to the temperature of formation.  Adapted from Snow Crystals, Natural and Artificial by Ukichiro Nakaya (1954).  Found on this website.
And once again we see above that dendrites are favored in the temperature region of maximum snow ratios.  Why is snow crystal growth so rapid in this temperature region?  As seen in the line on the graph above, this temperature region is also where the saturation vapor pressure with respect to ice falls furthest below the saturation vapor pressure with respect to liquid water.  This means that ice crystals grow preferentially at the expense of liquid water droplets, greatly speeding ice crystal growth.
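Just to make that concrete, here's a little Python sketch using Magnus-type formulas for the saturation vapor pressures (the coefficients below are commonly quoted WMO-style values; exact numbers vary a bit by source).  The water-minus-ice difference it computes peaks right around the dendritic growth zone:

import numpy as np

def es_water(t_c):
    # saturation vapor pressure over (supercooled) liquid water, hPa
    return 6.112 * np.exp(17.62 * t_c / (243.12 + t_c))

def es_ice(t_c):
    # saturation vapor pressure over ice, hPa
    return 6.112 * np.exp(22.46 * t_c / (272.62 + t_c))

temps = np.arange(-30.0, 0.5, 0.5)                 # degrees Celsius
diff = es_water(temps) - es_ice(temps)             # hPa; positive below freezing
print("largest water-ice difference near %.1f C" % temps[np.argmax(diff)])

With these particular coefficients the maximum lands near -12 degrees Celsius, which is just what the dendrite argument above would suggest.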

So we see that the snow ratio graph above (in figure 1) makes a lot of sense.  The next question is--what temperature are we going to assume for the snow ratio?  There are two main methods for looking at this (if we're only looking at temperature with no other information):
  1. One basic way is to just use the temperature at the surface as the temperature to look up the snow ratio on the chart above.  Since surface temperature is well-represented in both models and observations, it's commonly available and can perhaps provide a more refined estimate than simply guessing at a 10:1 ratio.  However, it's clear that most of our snow crystals are not forming right near the surface--they're forming much higher up.  I suppose if there's a deep isothermal layer from the surface to, say, 900mb, perhaps this might be better.  But in general, this won't be very accurate.
  2. Another temperature that can be used is the maximum temperature in the profile.  The thought is that any ice crystals forming above the maximum temperature will have to fall through the zone of maximum temperature and this might cause some melting or changing of crystal habit.  However, below that level, the ice crystals are "frozen" in whatever state they left the warmest layer, and therefore whatever reaches the surface should reflect the properties of that warmest layer.  This provides a distinct improvement over just using the surface temperature, as this temperature is probably closer to the temperature at which snow would form.  Of course, sometimes the surface temperature IS the max temperature in the profile, so then both temperatures would be the same.  (There's a little sketch of both of these choices right after this list.)
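Here's a quick Python sketch of those two choices.  The snow_ratio_from_temp lookup is just a hypothetical stand-in for the figure 1 curve (I made those numbers up), and so is the little profile--the point is only to show where each method grabs its temperature:

def snow_ratio_from_temp(t_c):
    # hypothetical stand-in for the empirical curve in figure 1
    if -18.0 <= t_c <= -12.0:
        return 15.0        # dendritic growth zone: fluffier snow, higher ratio
    return 10.0            # rough default elsewhere

# made-up profile: (pressure in mb, temperature in Celsius), surface level first
profile = [(990, -6.2), (925, -5.4), (850, -9.0), (700, -14.5), (500, -25.0)]

t_surface = profile[0][1]
t_warmest = max(t for _, t in profile)

print("surface-temperature method:  %.0f:1" % snow_ratio_from_temp(t_surface))
print("max-temp-in-profile method:  %.0f:1" % snow_ratio_from_temp(t_warmest))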
Here's an example of a model snow sounding annotated with both of these temperatures.
Fig 3 -- GFS 24-hour forecast sounding for KRFD at 12Z, Dec. 24, 2010. 
In the above sounding, note that we are saturated or nearly saturated up to around 500 mb--lots of moisture there.  We're also saturated between -12 to -18 degrees Celsius (where the temperature profile is highlighted in yellow), so there are probably dendrites forming.  However, the max-temp-in-profile theory assumes that the crystals will have properties as if they formed around -5.4 degrees Celsius.  From our chart above, this is about a 9:1 snow ratio.  The surface temperature isn't that much colder at -6.2 degrees Celsius.  This results in a snow ratio of about the same--9:1 or 10:1.

However, this doesn't necessarily reflect everything that's going on in the atmosphere.  It has been shown in many studies that areas of intense snow crystal growth and formation can be collocated with the areas of maximum vertical velocity (when the air is saturated).  If there is a lot of vertical motion within a saturated environment, lots of moisture is moving through that particular region and as a result lots of snow crystals can grow there.  This has given rise to the well-known "cross-hairs" technique for looking for areas of particularly heavy snowfall.  (Still looking for a good graphic that shows that clearly...).  If we are saturated, areas of greater upward vertical velocity tend to produce more snow crystals than areas with weaker upward vertical velocity.

This can be applied to give two more methods for getting snow ratios:

One way to choose the temperature at which most of our snow crystals are forming is to find the maximum vertical velocity within the saturated parts of the sounding.  Then, we find the temperature at that level.  Since snow crystal formation is enhanced in areas of high vertical velocity, we'll assume that we'll see more crystals from this particular level than any other level.  Therefore the temperature at this level should define the geometry of a good portion of our snow crystals.
Fig 4 -- 25 hour NAM forecast for KRFD at 13Z, Dec. 24, 2010.  Vertical velocity is shown in white with "zero" vertical velocity shown as the white vertical dotted line.
In the example profile above (from about the same time as the GFS model image in figure 3--the NAM has better vertical velocities) we see that the peak vertical velocity occurs at around 828mb.  We are near saturation (particularly with respect to ice) and the temperature is -8 degrees Celsius at this level.  Using our chart above, this would translate to a snow ratio of about 10:1.
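Here's a small Python sketch of that selection step--among the (near-)saturated levels, find the strongest upward motion and grab the temperature there.  The profile values are invented for illustration, and the resulting temperature would then be looked up on the figure 1 chart:

levels = [
    # (pressure in mb, temperature in Celsius, relative humidity in %, upward motion in arbitrary units)
    (950, -7.0, 92, 1.0),
    (900, -7.5, 96, 3.0),
    (828, -8.0, 98, 6.0),    # strongest lift in this made-up profile
    (700, -14.0, 95, 4.0),
    (600, -20.0, 80, 2.0),   # too dry -- gets filtered out below
]

saturated = [lev for lev in levels if lev[2] >= 90]             # crude saturation test
p, t, rh, w = max(saturated, key=lambda lev: lev[3])            # level of maximum lift
print("use T = %.1f C at %d mb, then read the ratio off the figure 1 curve" % (t, p))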

Of course, this is just choosing one level and assuming that the majority of our snow crystals will come from this level of maximum upward vertical velocity.  But we know that snow crystals are forming above and below that layer.  We also know that since we're below freezing in our profile, there's a very good chance that all the crystals are managing to fall through somehow.  So can we improve upon our forecast of snow ratio even more by taking into account snow crystals forming at other levels?

Daniel Cobb, when he worked at the Caribou WFO of the National Weather Service, developed an algorithm to do just that based on model data. This algorithm, now commonly called the COBB algorithm or the Caribou method, calculates a weighted average of snow ratios at all levels of model output where snow crystals could form (i.e., it's cold enough and the air is saturated, etc...).  The weighting depends on the vertical velocity--you sum all of the vertical velocities at each level of the model where there could be snow crystal growth and then divide each level's vertical velocity by that total to get the "weight" to apply to the snow ratio at that level.  In this way, the "crosshairs" technique described above can be expanded to all levels where snow crystals can be growing--including those layers where it's saturated in the dendritic growth zone.  As such, this technique (a form of which is used in Bufkit's "zone omega" snow ratio option) usually gives higher estimates of the snow ratio than the other techniques.  But often, it can be more accurate...
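Here's a rough Python sketch of that weighting idea as I understand it--this is not the operational code, the ratio_from_temp lookup is a placeholder for the figure 1 relationship, and the layers are invented:

def ratio_from_temp(t_c):
    # hypothetical stand-in for the figure 1 curve
    return 15.0 if -18.0 <= t_c <= -12.0 else 10.0

layers = [
    # (temperature in Celsius, relative humidity in %, upward motion in arbitrary units)
    (-7.0, 95, 2.0),
    (-10.0, 98, 5.0),
    (-14.0, 97, 8.0),   # saturated in the dendritic growth zone with strong lift
    (-20.0, 85, 3.0),   # too dry to count
]

# only layers where snow crystals could plausibly be growing get a weight
growing = [(t, w) for (t, rh, w) in layers if rh >= 90 and t < 0.0 and w > 0.0]
total_w = sum(w for _, w in growing)
weighted_ratio = sum(ratio_from_temp(t) * (w / total_w) for t, w in growing)
print("vertical-velocity-weighted snow ratio: about %.1f:1" % weighted_ratio)

Because the most heavily weighted layer sits in the dendritic growth zone, the weighted ratio comes out higher than a simple surface- or max-temperature lookup would--the same behavior described above.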

So there we have a summary of several techniques that can be used to obtain snow ratios.  All are based upon that critical graph in figure 1--that temperature-snow-ratio relation is the key to our snowfall forecasting.  To find a forecast snow depth?  Simply multiply the quantitative liquid precipitation forecast by the snow ratio.  For instance, if the model is telling you that .25 in. of liquid precipitation is expected and you have a snow ratio calculated at 10:1, to find the snow depth take .25 x 10 = 2.5 inches of snow.  It's that simple.

There are lots of excellent resources online for learning more about snow ratios.  For example, Daniel Cobb has a powerpoint presentation where he very nicely outlines the basic COBB algorithm--it can be found at:
http://cstar.cestm.albany.edu/nrow/NROW6/Cobb.ppt

The Warning Decision Training Branch also has a good teaching presentation as part of their AWOC winter weather course on snow ratios.  It goes into far more detail than I do here.  This presentation is available at:
http://www.wdtb.noaa.gov/courses/winterawoc/IC6/lesson5/part1/player.html

Of course, in closing, many of our advanced models (like WRF) can forecast snow amount explicitly if they have a microphysics scheme that computes ice crystal concentration.  So in the future we may have models explicitly try to determine snowfall amounts and use those instead of snow ratios.  But I still think snow ratios are rather fun...
Fig 5 -- Calculated 24-hour snowfall accumulations from the UW-WRF model at 12Z, Dec. 24, 2010. 

Wednesday, December 22, 2010

Cold air on IR in Canada

As of yesterday it's now officially the season of winter.  Thanks to the delayed response in our climate system, the coldest time of the year comes after the day with the least solar radiation (the winter solstice). So it's on with the cold weather.

Of course, when associating "winter" and "Canada" we naturally assume things are bitterly cold.  How cold is it?  Let's look at this infrared satellite image from this morning:
Fig 1 -- GOES-E infrared satellite image from 1745Z, Dec. 22, 2010.  From the HOOT website.
Normally we see cloud tops in the green-yellow-red range of colors on these particular infrared images.  According to the color bar, this corresponds to temperatures in the -20 to -70 degree Celsius range.  Infrared satellite works by measuring the longwave radiation emitted from the Earth's atmosphere.  It turns out that objects radiate away energy in proportion to how hot or cold they are.  The warmer the object, the more intense the radiation it emits.  By measuring how strong the longwave emissions are, we can infer the temperature at which they were emitted.  Since clouds are higher than the ground and temperature (usually!) decreases with height, they tend to be colder than the ground and emit less-energetic radiation than the ground itself.  Thus we can separate "cold" returns from clouds and "warm" returns from the ground. (It also helps that clouds are moving...)
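For the curious, here's a small Python sketch of that idea.  It assumes a single wavelength around 10.7 microns (a typical IR window channel) and a blackbody emitter--real retrievals account for emissivity and the instrument's actual spectral response--and simply inverts the Planck function to turn a radiance into a "brightness temperature":

import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant (SI)
lam = 10.7e-6                             # wavelength of the assumed IR window channel, m

def planck_radiance(T):
    # spectral radiance emitted by a blackbody at temperature T (W m^-2 sr^-1 per meter of wavelength)
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * k * T)) - 1)

def brightness_temperature(B):
    # invert the Planck function: what temperature would have emitted this radiance?
    return (h * c / (lam * k)) / math.log(1 + 2 * h * c**2 / (lam**5 * B))

for T in (300.0, 250.0):                  # mild ground versus a frigid snowpack or cloud top
    B = planck_radiance(T)
    print("T = %.0f K -> radiance %.2e, recovered brightness temperature %.1f K"
          % (T, B, brightness_temperature(B)))

The warmer scene emits noticeably more radiance, and inverting the measured radiance gives back the emitting temperature--which is exactly the quantity being color-coded in figure 1.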

But note in the above image how most of Canada is also in the green and yellow color range.  In looking at the loop of satellite images, we can see that these areas aren't really moving.  Is all of Canada stuck under a very cold stratus deck?  Not at all!  That's actually the radiation coming from the surface.  The surface is so cold that it's as cold as a lot of the clouds we're seeing in association with storms around the edges of the continent.  Looking at the color bar again, we see that those colors should correspond to temperatures in the -20 to -30 degrees Celsius range at the surface.  Is this accurate?
Fig 2 -- "Current Conditions" around Canada at around 18Z, Dec. 22 2010.  From the Environment Canada website.
Environment Canada's "current conditions" doesn't give a specific time, but I pulled this image around 18Z today so I assume it's from sometime around then.  Note the surface observations throughout western Canada are in the -10 to -20 degree Celsius range (the fact that Canada uses metric temperature units makes these comparisons so much easier...).  So the IR satellite image is close, though it does seem to be a few degrees too cold.  Regardless, that's pretty cold air.

Interestingly enough, though, this cold air at the surface does not translate to "cold" air aloft.  Here's the latest hemispheric analysis:
Fig 3 -- Hemispheric analysis of MSLP (contoured) and 1000-500mb thickness (shaded) at 12Z, Dec. 22, 2010.  From the HOOT website.
There's a tongue of relatively large thickness values stretching all the way from the central plains up to northern Greenland.  Now, thickness is directly proportional to the mean temperature in the layer (as opposed to geopotential heights which are more loosely connected to temperature) so we can infer that there are relatively warmer temperatures aloft in that region.  At least, warmer than in the two large troughs sitting off of the northwest and northeast coasts.
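That proportionality is just the hypsometric equation.  Here's a quick Python sketch (using the layer-mean temperature in place of the more correct virtual temperature) showing how sensitive the 1000-500mb thickness is to the mean temperature of the layer:

import math

Rd = 287.0    # gas constant for dry air, J kg^-1 K^-1
g = 9.81      # gravitational acceleration, m s^-2

def thickness_m(t_mean_kelvin, p_bottom=1000.0, p_top=500.0):
    # hypsometric equation: layer thickness grows linearly with layer-mean temperature
    return Rd * t_mean_kelvin / g * math.log(p_bottom / p_top)

for t_mean in (250.0, 260.0, 270.0):      # colder versus warmer layer-mean temperatures
    print("layer-mean T = %.0f K -> 1000-500mb thickness = %.0f m" % (t_mean, thickness_m(t_mean)))

A layer that is 10 K warmer comes out roughly 200 meters thicker, which is why the shaded thickness field is such a handy proxy for the mean temperature aloft.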

But this isn't totally unexpected.  We are under a very broad ridge in the central part of the country and you can see that translating to a very sprawling surface high pressure center across much of the plains on the image above.  Ridging aloft with high pressure below indicates large-scale subsidence.  However, to have a ridge in the first place there had to be higher heights, which (roughly, as stated above) correspond to "warmer" air--warmer air tends to occupy a greater volume than colder air, vertically expanding the troposphere and lifting heights.  Furthermore, the large scale subsidence inhibits cloud formation, and with snow-covered ground free to radiate to the open sky (particularly at night), the ground will cool off rapidly.  This already stable situation then becomes even more stable as we see colder air near the surface and "warmer" air aloft--a stable temperature profile.  All this makes for some pretty quiet conditions (and cold temperatures at the surface!).

There's a lot of hand waving in that argument above.  The details regarding the warm temperatures aloft changing heights could be debated in terms of quasi-geostrophic theory, but I'm not going to go into that now.  I just wanted to point out the cold temperatures visible in the IR image and how it translated to clear skies and cold temperatures for much of the middle of the continent.  

Monday, December 20, 2010

A low with no support

Looking at today's national radar mosaic, we can see that heavy rains are continuing in southern California.  However, what's new is this big swath of light precipitation (think snow) over much of the northern plains and the upper midwest.
Fig 1 -- NEXRAD base reflectivity mosaic for 1738Z, Dec. 20, 2010. From www.weather.gov.
Some media outlets are calling this particular storm an "Alberta clipper".  But, take a look at the current surface map--does it really look like this is coming out of Alberta?
Fig 2 -- Contoured pressure and shaded moisture at the surface at 17Z, Dec. 20, 2010. From the HOOT website.
The primary low pressure center seems to be over eastern Colorado or western Kansas.  That's a good spot for what we call lee-side cyclogenesis, not necessarily an Alberta Clipper type storm (though lee cyclogenesis plays a role in the formation of those, too).  When strong westerly winds aloft are forced over a mountain range, the air is more or less "compressed" between the mountain tops and the tropopause as it moves over the highest peaks.  It then "expands" again on the other side as it comes down the slopes.  When air expands like that, its pressure falls (you have the same amount of air taking up more space, so the internal pressure is smaller).  We see this all the time on weather charts--the eastern downslope side of the Rockies is a favored location for storm development.  There's also another factor: any vorticity in the column gets stretched as it descends the mountain slopes, and stretching increases the relative vorticity--another reason why lows "spin up" on the lee side of mountains.  But that's the subject for another blog post...
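Just to put a number on the stretching part of the story, here's a back-of-the-envelope Python sketch.  It assumes a simple barotropic column that conserves (relative vorticity + f) divided by its depth, and the depths are invented--but stretching the column as it descends the lee slope spins up a respectable amount of cyclonic vorticity:

f = 1.0e-4                     # Coriolis parameter, s^-1 (mid-latitude value)
H_over_mountains = 7500.0      # assumed column depth squashed over the high terrain, m
H_in_lee = 9000.0              # assumed column depth after descending the lee slope, m

zeta_start = 0.0               # assume no relative vorticity over the crest
zeta_lee = (zeta_start + f) * H_in_lee / H_over_mountains - f
print("lee-side relative vorticity: %+.1e s^-1 (cyclonic)" % zeta_lee)

That works out to about +2 x 10^-5 per second, a very typical synoptic-scale value.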

Of course, because we do a pressure correction to mean sea level in our surface pressure maps, sometimes that messes up our pressure field in the mountains.  So just to be sure the low we're seeing above is an actual surface low and not just some natural error due to there being lower pressure at higher terrain, let's look at a surface pressure change map for the past three hours.
Fig 3 -- Surface pressure changes (contoured) over the previous three hours at 17Z, Dec. 20, 2010.  From the College of DuPage website.
We see a broad area of weak pressure falls over the central plains with a rather strongly-concentrated area of pressure rises over eastern Wyoming.  This couplet of pressure rises and pressure falls does suggest a low pressure center halfway in between, right where we expected it to be.  It also suggests the low is moving in a direction connecting the two maxima in pressure rises and falls, so this would imply the low is moving more or less ESE across Kansas.  What's interesting, however, is the difference in magnitude between the pressure rises and the pressure falls.  The maximum pressure falls only seem to be at most 1 mb in the last 3 hours.  However, there are places behind the low where the pressure has risen 5 mb in the last 3 hours (according to these contours).  If there is more pressure rising behind the low than falling ahead of the low, it suggests that this low is weakening--that in the future it won't be as deep as it used to be.  What could be causing this?  Let's check our upper-air support.
Fig 4 -- 300 mb wind and geopotential height analysis at 12Z, Dec. 20, 2010.  From the HOOT website.
Remember our surface low was probably around northern Colorado about this time (12Z).  Well, that puts it almost right underneath that large ridge whose axis stretches down the high plains.  Ridges are usually associated with high pressure and large-scale subsidence--NOT good for low pressure formation at the surface.  And the jet streaks?  If anything, the low pressure center was under the right exit region of that more-or-less straight jet streak in the western US (the same jet streak we saw helping to usher in all that moisture to California the other day...).  The right exit region of a jet streak is normally associated with convergence aloft.  Convergence aloft leads to downward motion and an increase in pressure at the surface.  No wonder our low is weakening--it has virtually no upper-air support.

But maybe conditions will improve?  Let's look at the 24 hour NAM forecast for 300mb winds.
Fig 5 -- 24 hour forecast from the NAM model of 300mb winds and geopotential height at 12Z, Dec. 21, 2010.  From the HOOT website.
If anything, the situation has become even worse--there's now a broad ridge across basically the entire country with an anti-cyclonically curved jet streak stretching from the southwest and into the northern plains.  The only favored location for cyclogenesis in this pattern would be off the coast of California somewhere.  Sure, such strong westerly winds over the Rockies may spin up a few surface lows over the high plains under these conditions for the reasons explained above.  But the chances of these lows intensifying or bringing strong winds and really heavy snows are pretty low.  As further evidence of that, here's the 24 hour forecast of surface pressure from the NAM:
Fig 6 -- 24 hour forecast from the NAM of mean sea level pressure (contoured) , temperature (shaded) and winds for 12Z, Dec. 21, 2010.  From the HOOT website.
What happened to our low?  The main part of it went south into eastern Oklahoma (probably due simply to conservation of potential vorticity--another topic that deserves its own blog post).  However, note how it hasn't really strengthened at all.  Our minimum sea-level pressures right now are around 1000mb in the center of the low--here, 24 hours later, the minimum pressure contour is 1012 mb.  No strengthening there.  Some surface troughing did spread north in the model into northern Minnesota and Wisconsin.  It looks like there may be some interaction with the upper air pattern trying to spin something up there (after all, that is under an "exit" region of a jet--though I think the jet is curved the wrong way for this) but it's difficult to really tell.

So yes---there will be some snow with this low as it slowly moves out over the plains.  But winter storm warnings for heavy snow are the worst you'll see.  No blizzard warnings or high wind warnings or anything like that.  This low simply has no support for it to grow...

Saturday, December 18, 2010

Heavy Precip in California

The big weather story for the past few days and right on into today has been the heavy rain and snow falling throughout much of central and southern California.  As noted by Patrick Marsh's (and many other people's) blog posts and forecast discussions over the past few days, extreme precipitation amounts are being forecast for California.

A check of the most recent hemispheric analysis shows a rather deep 500 mb trough and surface low off of the northwest coast.
Fig 1 -- 12Z analysis of 500mb geopotential height (shaded) and mean sea-level pressure (contoured) on Dec. 18, 2010. From the HOOT website.
The 500 mb trough extends all the way down through northern California and well out into the Pacific.  The height gradient there (as well as the temperature gradient since temperature corresponds to thickness which roughly corresponds to height) hints at a jet stream oriented so as to bring lots of moisture up from the central Pacific and into the California coast.  We can see this on the water vapor imagery.
Fig 2 -- 2330Z GOES-W water vapor image on Dec. 18, 2010. From the HOOT website.
Of course, this image is from nearly 12 hours after the above analysis, but it still shows a strong current of moisture being pulled out of the tropics and into the California coast.  Note the enhancement of the water vapor right after the stream hits the coast--this shows where air is being lifted over the Coast and Sierra Nevada mountains. Since the water vapor channel on the GOES satellites tends to only pick up water vapor in the upper levels of the atmosphere, low-level moisture is often not as visible on these images.  However, when moist air is forced to rise over the mountains, all that very moist air in the lower levels (since, after all, most of that moisture is coming from the ocean surface which is at the lowest possible level...) is lifted up to heights where it becomes more detectable by the GOES imager.  Thus we see an enhancement of water vapor right as the air ascends over the mountains.

A quick look at the radar mosaic confirms large areas of moderately heavy precipitation falling across much of central and southern California.
Fig 3 -- Base reflectivity radar mosaic at 2358Z on Dec. 18, 2010.  From weather.gov.
No wonder the Los Angeles forecast office has flood watches and advisories out for most of their forecast area--southern California doesn't usually get this much rain.  But the ski buffs in the Sierra Nevadas must be loving all this new snow...

Of course, many people (at least, if they were the Seattle media) would blame this large amount of rain on the fact that we're currently in a La Nina pattern with cooler-than-normal sea surface temperatures across much of the equatorial central Pacific.  It's true that the west coast is expecting a wetter-than-average season right now.  The latest images from the Climate Prediction Center show how the sea-surface temperature anomalies have been and are remaining cold over the past few months--a pattern that defines a La Nina event.
Fig 4 -- SST anomalies for several weeks ending the week of Dec. 8, 2010.  Click on the image to see the animation.  From the Climate Prediction Center.
So this current deluge just reinforces the connection between a wetter-than-normal west coast and La Nina conditions over the Pacific.  Is it proof of such a connection in and of itself?  No.  But we'll just say that because we're seeing La Nina conditions, this heavy rain isn't as unexpected as it would have been otherwise.

Thursday, December 16, 2010

How Radar VCPs Really Make a Difference

Since my post(s) talking about different volume coverage patterns or scanning strategies on the NEXRAD radars, I've been asked by several people just what the big deal is about these different VCPs. Why do we care what scanning strategy the radar is using?  And is there really that big of a difference between clear air and precipitation mode?

To answer this, I scoured through my collection of saved radar images to find examples of times when radars changed VCP to illustrate what the difference is between different scanning strategies, particularly between a precipitation and a clear air mode.  Just to review, what makes clear air mode different?
  1. The radar spins more slowly and sometimes uses a slightly longer pulse (longer in VCP 31 than in VCP 32--the two clear air mode VCPs), which gives it greater sensitivity in its reflectivity measurements.  The receiver assembly itself is also tuned to be more sensitive.
  2. The radar has fewer vertical tilts, so it's only focused on the lower parts of the atmosphere.  Even so, the average clear air mode VCP takes around 10 minutes to complete because of the slower rotation rate.
  3. Because of the longer pulse length, the Nyquist velocity (the maximum unambiguously detectable velocity) is lower--only about 25 knots in VCP 31, versus around 53 knots in VCP 32.  This creates difficulty in measuring wind speeds in high wind events.  (There's a short sketch of the relationship right after this list.)
  4. Something I haven't mentioned before--because we are trying for increased sensitivity in clear air mode, the receiver is tuned to receive a lot more return power than it normally would (since it takes a lot more power to see finer structures of very low reflectivities).  As such, the receiver tends to saturate with power if the return gets to be too much. Because of this, there's actually a maximum reflectivity value detectable in both clear air mode VCPs.  Any return higher than that won't be measured.  So if you're using a clear-air VCP to measure a hail core (something that's going to return a LOT of power...), the actual maximum reflectivity of the hail core would not be measured--the radar would saturate at its threshold value and that's the maximum value you would see. (I'm not sure about the exact value of this threshold...I'll have to look it up).
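For reference, the Nyquist velocity of a pulsed Doppler radar is the wavelength times the pulse repetition frequency divided by four, and the long-pulse, long-range modes run at lower PRFs--that's what drives the number down.  Here's a quick Python sketch; the PRF values are round, illustrative numbers rather than the actual VCP settings:

wavelength = 0.107             # the WSR-88D is S-band, roughly 10.7 cm

def nyquist_ms(prf_hz):
    # maximum unambiguously detectable radial velocity for a given PRF
    return wavelength * prf_hz / 4.0

for prf in (450.0, 1000.0):    # an illustrative "low PRF" mode versus a "higher PRF" mode
    v = nyquist_ms(prf)
    print("PRF %4.0f Hz -> Nyquist velocity %.1f m/s (about %.0f knots)" % (prf, v, v * 1.944))

Those round PRFs give Nyquist velocities in the low 20s and low 50s of knots--right in line with the VCP 31 and VCP 32 numbers quoted above.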
Anyhow, here are a couple images to illustrate how the different fields change switching between precipitation and clear-air mode.  First is a slide with the base reflectivity, base velocity, and spectrum width images from the Fort Drum, NY radar on May 5, 2009.  This is when the radar is in VCP 121--precipitation mode.
Fig 1 -- Base moments from the KTYX radar at 1931Z on May 5, 2009.
Reflectivity is on the upper left, spectrum width is on the upper right and velocity is on the bottom.  Now, five minutes later on the next volume scan, the meteorologists controlling the Fort Drum radar had changed to clear air mode--VCP 32.
Fig 2 -- Base moments from the KTYX radar at 1936Z, May 5, 2009.
The differences between the two images in terms of sensitivity are actually pretty significant.  To compare them, I like to click on the images and open them in new browser tabs and then switch between the two tabs.  You can really see what a difference the increased sensitivity makes.  Not only are the finer details of the precipitation resolved, but the areal coverage of the precipitation area is increased.  Furthermore, because this is VCP 32 (which still has a rather high Nyquist velocity), the increased sensitivity gave a much more coherent picture of the velocity field.  Notice how the spectrum width values (which can be thought of as a measure of how uncertain the velocity measurements are) decreased significantly once the VCP switched to clear air mode--and, consequently, the velocity measurements became much smoother.  Using a clear air VCP in this case works because--
  1. This is a slowly-evolving event--these look like showers and stratiform rain and there is no fast-developing convection here.  Therefore the 10 minute update cycle of a clear air VCP is acceptable.  In a fast-developing convective environment, a lot of important things can happen in those 10 minutes...
  2. Also as a consequence of this not being a very convective environment, the depth of these precipitating clouds is not expected to be too deep.  Therefore we can get away with having fewer vertical tilts of the radar in a clear-air VCP.
  3. The maximum reflectivity values we saw even in precip mode were not that high--certainly no hail cores or very heavy downpours.  Therefore we're not as worried about reaching that saturation reflectivity value and having inaccurate estimates of reflectivity (and consequently rainfall).
Since this was VCP 32 (which has a higher Nyquist velocity than VCP 31), we're not as concerned with the velocity estimates.  But, let's look at another case where a radar went from precipitation mode to VCP 31.  We'll start with the reflectivity measurements from the Oklahoma City radar (KTLX) in VCP 21 (precipitation mode) on January 27, 2010.
Fig 3 -- 0.5 degree base reflectivity from KTLX at 1549Z on January 27, 2010.
We see some light showers in eastern Oklahoma.  The reflectivity values certainly aren't very high and we're not expecting these to be rapidly-evolving or convective showers.  So, the switch was made to VCP 31.
Fig 4 -- 0.5 degree base reflectivity from KTLX at 1555Z on January 27, 2010.
Because it uses a longer pulse, VCP 31 has much more sensitivity than even VCP 32.  You can see this increased sensitivity dramatically shown in the two radar images above.  Note how much the areal coverage of the precipitation increases--it's probably lightly raining (or snowing!) in many more areas than the radar implied in VCP 21.  Notice that we also picked up more clutter near the radar, though.  That's another hazard of clear-air mode VCPs--they are more sensitive to everything--not just precipitation.  So if there is clutter, you'll tend to see MORE clutter in a clear-air mode VCP as opposed to a precipitation mode VCP.

Of course, remember that to get this very high sensitivity by using a longer pulse, the radar in VCP 31 sacrifices its ability to resolve high velocities well.  As a result, compare the velocity image before:
Fig 5 -- 0.5 degree base velocity image from KTLX at 1549Z on January 27, 2010.
To the velocity image after the switch to VCP 31:
Fig 6 -- 0.5 degree base velocity from KTLX at 1555Z on January 27, 2010.
The increase in areal coverage is once again very noticeable.  But where we had a very nice, coherent velocity field in VCP 21, we now suffer from some chaotic-looking velocity measurements in VCP 31.  Since the wind speeds above the ground are probably well over the Nyquist velocity of about 25 knots in VCP 31, the radar starts aliasing the velocities (note how we start getting alternating bands of red and green instead of simply two coherent blobs of red and green on either side of the radar).  Trying to sort out actual velocities from this image can be difficult (but not impossible).  But, if these really are just light showers, then we're probably not as concerned with the wind field.  In that case, the enhanced reflectivity sensitivity of VCP 31 may be desired, particularly if we're trying to find where exactly it's raining--even only a little.

So there are a couple of graphical examples of the difference between clear-air and precipitation VCPs.  I hope this helps those people who were asking me about this to better visualize the difference.  As always, if you have any questions or comments, please feel free to email me at lukemweather@gmail.com.

Tuesday, December 14, 2010

MD for the NW

It's not often at all that you see something like this out here in the Pacific Northwest:
Fig 1 -- Mesoscale Discussion 2128 from the Storm Prediction Center on Tuesday, Dec. 14, 2010.
A mesoscale discussion for this region?  Not only that, but a discussion about an isolated tornado threat for the Puget Sound lowlands down through the Willamette Valley? Very intriguing.  To see the full text of the mesoscale discussion, you can visit the SPC's page for it here.  They note that there probably won't be a watch because the threat is so limited.  However, there already has been a tornado report near Salem, OR.

Right now, there are just some scattered showers moving through the region.
Fig 2 -- 0.5 degree base reflectivity from KRTX radar at 2207Z, Dec. 14, 2010.
Some stronger convective elements are present, particularly in the storm east of Portland.  Some of the showers coming over the Coast Range to the west are also showing stronger cores.  Added orographic lift, as the southwesterly winds aloft are forced to rise over the low mountains there, seems to be helping to get some of the showers convecting.  A view looking south from the top of the Atmospheric Science building at the University of Washington in Seattle shows the rain to the south (but clear skies over Seattle!).
Fig 3 -- Southward view toward Mount Rainier (though you can't see it) from the roof of the UW Atmospheric Sciences building at 2:14 PM PST.  From the UW Northwest Observations website.
So why this isolated tornado threat for the area?  The big story is the wind shear.  Here's this morning's 12Z sounding out of Salem, OR.
Fig 4 -- 12Z sounding from Salem, OR, on Dec. 14, 2010.  From the SPC website.
Focusing on the winds, it's clear there's a lot of wind shear, both directional and speed-wise, going on in the lowest levels.  Winds go from southerly at 10 knots at the surface to westerly at 40 knots at 850mb.  That's a fairly large amount of wind shear, particularly for this region of the country.  In terms of instability, the lapse rate is conditionally unstable--it's steeper than a moist adiabat but not quite as steep as a dry adiabat.  The absence of any strong temperature inversion also helps make this a tempting sounding for instability.  There's clearly a lot of moisture and with just a little warming at the surface, a fair amount of CAPE could be generated.  There are many ways to get warming at the surface.  One way is through warm air advection--wouldn't you know that the surface to 850mb directional wind shear represents a veering of winds with height.  That's a sign of warm air advection going on.  You can also see in the sounding above that the SPC analysis tools have  picked up on this as well--the vertical bar chart labeled "Inferred Temperature Advection" shows red bars (indicating warm air advection) at low levels with blue bars (indicating cold air advection) in the mid levels.  Warming below and cooling above will tend to destabilize the lapse rate.
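Just to quantify that low-level shear, here's a small Python sketch that converts the two wind reports quoted above (southerly at 10 knots at the surface, westerly at 40 knots at 850mb) into components and takes the vector difference:

import math

def components(speed, direction_from_deg):
    # meteorological convention: direction is where the wind blows FROM
    rad = math.radians(direction_from_deg)
    return -speed * math.sin(rad), -speed * math.cos(rad)

u_sfc, v_sfc = components(10.0, 180.0)    # southerly at 10 knots at the surface
u_850, v_850 = components(40.0, 270.0)    # westerly at 40 knots at 850mb

shear_u, shear_v = u_850 - u_sfc, v_850 - v_sfc
print("surface-to-850mb bulk shear: about %.0f knots" % math.hypot(shear_u, shear_v))

That works out to roughly 40 knots of shear in a fairly shallow layer, which backs up the "big story is the wind shear" point above.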

Of course another thing that will help warm the surface is clear areas where sunlight can get through.  Here's a look at the latest visible satellite image over Washington.
Fig 5 -- Visible satellite image over Washington state at 2215Z, Dec. 14, 2010.  From the College of DuPage website.
There's still a lot of stratus cloud cover over southwestern Washington and northern Oregon (you could even see that on the webcam image above).  But, Puget Sound is definitely clear.  There are also some isolated patches of clearing on the northern Oregon coast between areas of very convective-looking clouds.  This makes sense as strong convective updrafts usually have compensating downdrafts (or subsidence) that tend to clear the air around the storm.  This clearing lets sunlight get through and can further help to warm things up.

So, perhaps we'll see some more destabilization this afternoon--there's still a few hours of sunlight left.  And with that wind shear, we'll have to watch extra carefully to see if we spot any rotation.

Sunday, December 12, 2010

Questions From Odd Velocity Patterns on the DMX Radar

I noticed a lot of pretty patterns last night looking at the radar data as the snow moved through the midwest.  Some of the best came from the Des Moines radar (KDMX).  They were running in VCP 31 for much of last night--a good VCP for observing the structure of snow, but a bad one for getting good velocity data.  Here's the reflectivity field at 0.5 degrees:
Fig 1 -- 0.5 degree base reflectivity from KDMX at 405Z, Dec. 12, 2010.
Excellent structure with multiple snow bands can be seen.  However, a look at the velocity image for this time shows a pretty complicated picture.
Fig 2 -- 0.5 degree base velocity from KDMX at 405Z, Dec. 12, 2010.
In an earlier post, I talked about how VCP 31 had issues measuring velocity because it used a longer radar pulse.  In fact, the maximum unambiguous velocity it can detect is around 11.5 meters per second, or around 22 knots.  Looking at the above image, you can see I've also overlaid surface observations.  Surface winds are already at 20-30 knots across much of Iowa--so we're already approaching the threshold.  Wind speeds typically increase with height as well.  So, as the radar beam goes out (and up in height), it's going through faster winds--faster than its maximum unambiguously detectable velocity.  That's why we're seeing so many layers of opposing color here--the radar is "aliasing" velocities higher than 22 knots to the opposite direction.  Then the radar aliases again once the velocities get past 44 knots--only now it's aliasing back to the correct direction, but at the wrong speed.  Each multiple of that maximum velocity (the Nyquist velocity) triggers another aliasing of the velocities.  And based on wind profiles, we get well above 70 knots of wind aloft--that's at least three aliases of velocity going on in that image above.
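Here's a little Python sketch of that folding behavior--any true radial velocity gets mapped back into the interval between plus and minus the Nyquist velocity, so a once-folded velocity shows up with the wrong sign and a twice-folded velocity shows up with the right sign but the wrong magnitude:

V_NYQ = 11.5   # Nyquist velocity for this mode, m/s (about 22 knots)

def displayed_velocity(v_true):
    # fold the true radial velocity back into the unambiguous interval [-V_NYQ, +V_NYQ]
    return (v_true + V_NYQ) % (2 * V_NYQ) - V_NYQ

for v in (8.0, 15.0, 30.0):    # below the Nyquist velocity, folded once, folded twice
    print("true %+5.1f m/s -> displayed %+5.1f m/s" % (v, displayed_velocity(v)))

The 15 m/s example comes back as -8 m/s (opposite direction) and the 30 m/s example comes back as +7 m/s (right direction, far too slow)--exactly the kind of layered, alternating colors showing up in the image above.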

Let's go to a higher tilt (3.5 degrees) and zoom into the velocity image a bit.
Fig 3 -- 3.5 degree base velocity from the KDMX radar at 412Z, Dec. 12, 2010.
Here you can see there is a lot going on in the lowest 10,000 feet of the atmosphere.  We know based on the surface observations that the winds near the surface are out of the north-northwest.  This means we'd generally expect green colors (meaning incoming air) to the north and red values (meaning outgoing air) to the south.  Of course, the winds are so close to the Nyquist velocity that they're almost immediately aliased--in fact most of the large red area to the north of the radar should actually be green.  And that green blob directly north of the radar?  That's where the velocities have been aliased again--they're back to going the right direction, but the magnitude is off by a factor of two times the Nyquist velocity since they've been aliased twice.  Similar things are happening to the south of the radar, where everything should probably be red.

Anyhow, one thing we can do based on the image is draw the wind directions based on the location of the zero isodop.  The zero isodop is the line of "zero" velocities (the gray line) that separates the inbound and outbound air (it can actually be any place where zero velocities are showing up but the zero isodop is usually the one I just described).  The zero isodop follows the line where the wind velocity is perpendicular to the radar beam.  Since the winds are roughly north to south in the image above, when the radar beam scans to the east or west, there will be a point when the winds are blowing perpendicular to the radar beam.  Since the radar can only measure air moving toward or away from the radar, it measures no velocity when the air is moving perpendicular to the radar beam.  Thus we get a line of zero velocity in every radar image that represents all the places where the wind is blowing perpendicular to the radar beam.  We can see that in the above image, the "true" zero isodop is the gray line of zero velocities that squiggles all the way across the image and goes through the radar.
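Here's a small Python sketch of that geometry.  The radar only measures the along-beam component of the wind, so the radial velocity drops to zero wherever the beam points perpendicular to the wind--which is exactly what the zero isodop traces out (the wind speed and direction below are invented):

import math

def radial_velocity(speed, wind_from_deg, beam_azimuth_deg):
    # u, v components of a wind blowing FROM wind_from_deg
    u = -speed * math.sin(math.radians(wind_from_deg))
    v = -speed * math.cos(math.radians(wind_from_deg))
    # project the wind onto the beam direction (positive = away from the radar)
    az = math.radians(beam_azimuth_deg)
    return u * math.sin(az) + v * math.cos(az)

# a 15 m/s north-northwesterly wind (blowing from 340 degrees), sampled at several beam azimuths
for az in (340, 70, 160, 250):
    print("beam azimuth %3d deg -> radial velocity %+5.1f m/s" % (az, radial_velocity(15.0, 340.0, az)))

Looking north-northwest (340 degrees) the full 15 m/s shows up as inbound, looking south-southeast (160 degrees) it shows up as outbound, and at the two perpendicular azimuths (70 and 250 degrees) the radial velocity is essentially zero--those are the points the zero isodop passes through.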

Since along this line we know the direction the radar beam is pointing, we can draw arrows perpendicular to the radar beam along the zero isodop and extract wind directions.  Since the radar beam increases its height off the ground the further it goes out, we can extract a vertical profile of winds from this zero-isodop analysis. I've drawn a few sample wind direction arrows in the image below (I make no claims about the wind velocity, just the direction).
Fig 4 -- 3.5 degree base velocity image from KDMX at 412Z, Dec. 12, 2010.  Annotated with wind direction arrows.
We can see that because the zero isodop "squiggles", the wind directions also change with height.  In fact, it looks like there are three different layers here--immediately near the radar, the winds are out of the north-northwest.  Then there seems to be this small layer (which works out to between 2500-5500 feet, about) where winds are more out of the north-northeast.  Then above that, there's another layer where winds are more northerly.  Of course, between these layers there is directional wind shear going on.  I've labeled these wind shear places (where the wind is changing direction) in the image below:
Fig 5 -- 3.5 degree base velocity image from KDMX at 412Z, Dec. 12, 2010.  Annotated with wind direction arrows and locations of directional shear.
Now, from our thermal wind arguments, we know that veering winds (winds turning clockwise with height) tend to imply warm air advection and backing winds (winds turning counterclockwise with height) tend to imply cold air advection.  Well, that's kind of funny.  We know for a fact at the surface that lots of cold air advection is going on.  Temperatures are in the teens and single digits in northwestern Iowa and in the mid 30s in central Iowa.  With north-northwesterly winds near the surface, this would mean that cold air is moving into central Iowa and we're seeing cold air advection.  However, these wind shifts would imply that there is some sort of weak warm air advection going on just above the surface.

What's causing this?  To be honest, I'm really not sure.  Some sort of latent heat effect from melting precip?  The surface temperatures in central Iowa are very close to freezing, or even slightly above it.  Perhaps some of the snow is melting right before it hits the surface; melting absorbs latent heat and cools the lowest layer, which could make the air just above look relatively warmer.  It's an intriguing possibility.  We don't see much of a melting layer in the reflectivity image:
Fig 6 -- 3.5 degree base reflectivity image from KDMX at 412Z, Dec. 12, 2010.
Perhaps there is some slight enhancement of the reflectivity close to the radar (i.e. at low levels) which would indicate melting precipitation.  But it's difficult to tell.  Another thought is that this could be the so called "warm conveyor belt" of air wrapping around the upper-level trough bringing slightly warmer air (most likely from the northeast) over the shallow surface layer.  This seems slightly more plausible--after all this is the wrap-around region of the cyclone--but at the same time I would have expected the warm conveyor belt to be a bit higher off the ground than 2500 feet.  But, perhaps this is the case.

The radar can also somewhat verify itself when it comes to areas of wind shear.  There's another product called the spectrum width product.  Think of this as a standard deviation of velocity measurements at a given point.  Since the radar processor takes an average of several different measurements to determine the velocity at a particular point, the spectrum width is more or less just the standard deviation of all those measurements.  Therefore areas of high spectrum width tend to be areas of higher uncertainty with regards to what the velocity is.  This could mean that the wind velocity is changing rapidly in that bin, like in areas of high shear or turbulence.  Here's the spectrum width image at the same time as the above images.
Fig 7 -- 3.5 degree spectrum width image from KDMX at 412Z, Dec. 12, 2010. 
This is actually the image that originally inspired this post, as I thought it a remarkable image.  It's not often on radar that you see a nicely spiraling pattern like that (well, outside of a hurricane).  There are distinct bands where the spectrum width is peaking.  So, I thought, where do these spectrum width maxima cross the zero isodop?  I added black lines to the annotated velocity image to show where this happens.
Fig 8 -- 3.5 degree velocity image from KDMX at 412Z, Dec. 12, 2010.  Annotated with wind directions, areas of wind shear, and lines corresponding to maximum bands in spectrum width.
Oddly enough, those maximum bands of spectrum width seem to correspond almost exactly to the locations where we're seeing that directional wind shear (and inferred temperature advection) in the velocity image.  But this makes sense--if the wind directions are changing rapidly with height, the radar's estimate of the wind velocity may not be as good.  So the spectrum width image seems to confirm these areas of wind shear.
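As a toy illustration, here's a Python sketch that treats the spectrum width like the standard deviation of the individual velocity samples in a range bin--a bin sampling a steady wind comes back narrow, while a bin straddling a shear layer comes back wide (all numbers invented):

import numpy as np

uniform_bin = np.array([11.0, 11.5, 10.8, 11.2, 11.1])   # m/s samples in steady flow
sheared_bin = np.array([6.0, 9.0, 12.0, 15.0, 18.0])     # m/s samples across a shear layer

print("steady flow:  mean %.1f m/s, spectrum width ~%.1f m/s" % (uniform_bin.mean(), uniform_bin.std()))
print("sheared flow: mean %.1f m/s, spectrum width ~%.1f m/s" % (sheared_bin.mean(), sheared_bin.std()))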

What I can't exactly describe is why the spectrum width bands seem to be spiraling.  If there was one uniform layer of veering winds and then another uniform layer of backing winds, I would have expected concentric circles, one at each level where the velocities were changing, and not a spiral.  This probably says that there's some sort of tilt to these layers in the horizontal, but I can't say much more beyond that.  If anyone else has any ideas, by all means, please let me know...

This sort of isodop analysis to determine winds is actually used by an algorithm in the radar product generator to create a product called VAD winds.  It uses model soundings (and some math on the velocity field) to calculate a vertical profile of winds (both speed and direction) from the radar (though it assumes that the winds are horizontally uniform across the radar domain--not always the best assumption).  The VAD profile can be looked at in "meteogram" format like shown below.
Fig 9 -- VAD winds from KDMX at the times listed on Dec. 12, 2010.
The profile corresponding to the volume scan of all the images above is the profile that is furthest to the right.  And what do you know--winds start out of the northwest at the surface, then veer to northeasterly from about 2000-5000 feet, then back to more northerly above that.  So the VAD winds also confirm these slight directional wind shifts--and verify my analysis of the velocity image.

So my analysis of the image seems to be consistent.  But my interpretation of the images is rather loose.  What really is causing these subtle variations in the wind direction (since it's not just showing flat out cold air advection)?  Is there some sort of warming due to precipitation melting near the surface?  Or is this wind shift simply due to friction with the land causing odd wind direction shifts near the surface?  Or is that the warm conveyor belt showing up as subtle shifts in the velocity profile?  I have my random thoughts, as explained above.  I'd welcome any thoughts anyone else has.  I just thought that there was a really pretty spiral image in the spectrum width and this is where it led me...

Also, just for comparison, the sounding from Omaha from 00Z (four hours before and over 100 miles to the west--well into the cold air) looked like this:
Fig 10 -- 00Z sounding from Omaha, NE on Dec. 12, 2010.  From the HOOT website.
They also show a change from north-northwesterly to northerly winds somewhat above the surface.  Does this confirm the warm conveyor belt hypothesis?  The temperature profile does show warmer temperatures above the cold layer near the surface.   Lots of puzzles with this one.