There are currently many articles and posts encouraging higher data center temperatures. ASHRAE has widened its recommended temperature and humidity ranges for data centers, and many blogs are recommending and discussing drastically increased temperatures. Lately the discussion has turned to work environments of over 110F in data centers. Note that these high temperatures are found in new or redesigned data centers with hot/cold aisle separation (hot aisle containment or cold aisle containment).
Existing data centers should definitely investigate possible temperature increases as well as humidity range broadening. However, any such temperature change should be made slowly and with considerable monitoring. In traditional data centers it is easy to get upside down with temperature changes.
Keep in mind that the recommendations are for equipment inlet temperatures. The temperatures over 90F being talked about are rack outlet temperatures. Invest in at least some moderate temperature monitoring for the rack inlets before making any changes. Get at least some trending data, then change temperatures gradually. You want to make sure that you are not getting a lot of hot-air recirculation through the equipment, particularly at the top-of-rack and end-of-aisle trouble spots. If your data center doesn't already have them, invest in rack blanking panels to prevent recirculation within the rack.
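As a minimal sketch of the kind of check that monitoring enables (the sensor names, readings, and threshold here are hypothetical, not from any particular product): once you have trending data per inlet sensor, flagging the sensors whose latest reading has drifted above your inlet limit points you straight at the recirculation trouble spots.

```python
# Hypothetical rack inlet readings (F); in practice these would come from
# your monitoring system's trending data.
INLET_LIMIT_F = 80.6  # assumed upper inlet limit for this example (27C)

readings = {
    "rack12-top":    [78.1, 79.4, 81.2, 82.0],  # top of rack: classic recirculation spot
    "rack12-middle": [72.3, 72.8, 73.1, 73.0],
    "rack12-bottom": [68.9, 69.2, 69.0, 69.4],
}

def hot_spots(readings, limit=INLET_LIMIT_F):
    """Return the sensors whose most recent reading exceeds the inlet limit."""
    return [name for name, temps in readings.items() if temps[-1] > limit]

print(hot_spots(readings))  # → ['rack12-top']
```

The top-of-rack sensor is the one over the limit, which is exactly where missing blanking panels or poor aisle separation tend to show up first.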
Another point to watch for: in some traditional data centers with CRAC units against the walls, the location of the temperature sensor on the CRAC units can be a slight problem. Many of these units are designed for open returns, usually an open top for downdraft units, and some have their temperature sensor right on top, directly in the return air stream. These units are therefore measuring, and using for control, air that is not at the rack inlet supply temperature.
You may not see much savings, though, depending on the type of cooling involved. If you are using chilled water, you might want to experiment with increasing the water temperature once you have the data center temperature stabilized. Keep monitoring the rack inlet temperatures as you raise the chilled water temperature. The largest savings come from economizers, whether chilled water or air-side systems: as the data center temperature is increased, the effectiveness of economizers increases.
Ultimately you should have a real-time power monitoring system in the data center, in addition to the temperature monitoring system, before making any changes. This will help ensure that there are actual savings, as there will be a temperature that proves most efficient for your data center and equipment, above which efficiency will start to decrease.
Use total data center energy use (including cooling) to find the best temperature for your data center. Do not use PUE for this: PUE can be deceptive when changing data center temperature. The problem comes from the fans in the IT equipment. They are usually variable speed and temperature controlled, such that fan speed starts to ramp up as the inlet temperature goes above 78F. Most computers have 5 to 10 fans, which adds up to roughly 300 fans per rack. What happens is that you decrease the infrastructure side of PUE (cooling) at the same time you increase the IT side (IT equipment fans), shifting much of the cooling cost to the IT equipment fans, which are much less efficient. The result is a lower PUE but higher overall energy use. Using overall energy use avoids this trap.
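The arithmetic behind that trap can be shown with made-up but plausible numbers (the kW figures below are illustrative assumptions, not measurements): cooling load drops after a setpoint increase, IT fan load rises, PUE improves, and total energy still goes up.

```python
def pue(it_kw, infrastructure_kw):
    """PUE = total facility power / IT equipment power."""
    return (it_kw + infrastructure_kw) / it_kw

# Before the temperature increase (illustrative numbers)
it_before, cooling_before = 500.0, 250.0
# After: cooling drops by 60 kW, but the IT equipment fans draw 80 kW more,
# and that fan power is counted on the IT side of PUE.
it_after, cooling_after = 580.0, 190.0

print(round(pue(it_before, cooling_before), 3))  # → 1.5
print(round(pue(it_after, cooling_after), 3))    # → 1.328
print(it_before + cooling_before)                # → 750.0 kW total
print(it_after + cooling_after)                  # → 770.0 kW total
```

PUE "improves" from 1.5 to about 1.33 even though the facility is burning 20 kW more overall, which is why total energy, not PUE, is the right yardstick here.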
You might also investigate moving the CRAC units to align with the hot aisles if they are not already so aligned. Another option is to install ducted returns from the hot aisles, which may be more efficient and/or cheaper to implement; it may even be possible to use the data center drop ceiling as a return plenum. These changes will allow you to increase the temperature further while maintaining more accurate inlet temperatures.