The Cloud evolves every day to offer better services such as on-demand storage and on-demand compute resources. The growing demand for Cloud services raises environmental questions: the electricity consumed each year to run these services exceeds the annual electricity consumption of India. Several studies propose to lower the power consumption of data centers by consolidating the system and turning off some servers. These solutions save energy on the Cloud side only and do not take the users into consideration. The solution we propose to this energy problem is to offer the user a simple control to manage the energy impact of her application in the Cloud. Our study focuses on scientific data-intensive applications. The user can choose between an energy-efficient and a performance execution mode, which results in less or more resource allocation in the Cloud for her application. Less resource allocation allows a better consolidation of the virtual machines and favors shutting down more unused physical servers. For the evaluation we deployed our solution on Grid'5000, a French platform for experimenting with distributed systems. As a benchmark we ran Montage, a workflow that processes astronomical images. The evaluation shows promising results in terms of power consumption: the execution time of the workflow is longer in the energy-efficient mode than in the performance mode, but the energy saved is worth it. Our solution can thus provide trade-offs between energy and performance different from those usually offered.