Classical search algorithms such as A* or IDA* compute complete, optimal solutions in a single pass, which can then be executed. But in many domains agents either do not have the time to compute complete plans before acting, or should not spend that time, because the environment is dynamic. Extensions to A* such as LRTA* address this problem by incrementally learning an exact heuristic function, but convergence is slow. In this paper we introduce Partial-Refinement A* (PRA*), which fully interleaves planning and acting through path abstraction and refinement. We demonstrate the effectiveness of PRA* in the domain of real-time strategy (RTS) games. On maps taken from popular RTS games, we show that PRA* not only interleaves planning and execution cleanly, but does so with only minimal loss of optimality.
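To make the core idea concrete, the sketch below is a minimal, illustrative rendering of partial refinement: plan at an abstract level, refine only a short prefix of the abstract path into concrete moves, execute that prefix, and replan from the new position. It assumes a 4-connected grid, a simple fixed-block abstraction, and plain BFS at both levels for brevity; PRA* itself builds a clique-abstraction hierarchy and searches with A*, so names such as `pra_step`, `BLOCK`, and the corridor-restricted refinement are simplifications for exposition, not the paper's implementation.

```python
# Illustrative sketch of partial refinement (not the PRA* of the paper).
# Assumptions: 4-connected grid, fixed 2x2-block abstraction, BFS in
# place of A* at both levels to keep the code short.
from collections import deque

GRID = [
    "..........",
    "..######..",
    "..#....#..",
    "..#.##.#..",
    "....##....",
    "..........",
]
W, H, BLOCK = len(GRID[0]), len(GRID), 2

def passable(x, y):
    return 0 <= x < W and 0 <= y < H and GRID[y][x] == "."

def bfs(start, is_goal, ok):
    """Shortest path from start to any node satisfying is_goal,
    moving only through nodes accepted by ok."""
    parent, frontier = {start: None}, deque([start])
    while frontier:
        cur = frontier.popleft()
        if is_goal(cur):
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in parent and ok(nxt):
                parent[nxt] = cur
                frontier.append(nxt)
    return None

def sector(cell):
    return (cell[0] // BLOCK, cell[1] // BLOCK)

def open_cells(sec):
    sx, sy = sec
    return [(x, y) for x in range(sx * BLOCK, (sx + 1) * BLOCK)
                   for y in range(sy * BLOCK, (sy + 1) * BLOCK)
                   if passable(x, y)]

def pra_step(start, goal, k=3):
    """Plan at the abstract (sector) level, then refine only the first
    k abstract steps into a concrete partial path."""
    # A sector is an abstract node if it contains any open cell.
    abstract = bfs(sector(start),
                   lambda s: s == sector(goal),
                   lambda s: len(open_cells(s)) > 0)
    if abstract is None:
        return None
    corridor = set(abstract[:k + 1])
    ok = lambda c: passable(*c) and sector(c) in corridor
    if sector(goal) in corridor:
        targets = {goal}
    else:
        # Refine toward any open cell of the k-th corridor sector; the
        # real algorithm refines to the goal's abstract parent instead.
        targets = set(open_cells(abstract[k]))
    return bfs(start, lambda c: c in targets, ok)

def pra(start, goal, k=3):
    """Interleave: refine a partial path, 'execute' it, then replan."""
    path = [start]
    while path[-1] != goal:
        step = pra_step(path[-1], goal, k)
        if not step or len(step) < 2:
            return None  # stuck; block abstraction can over-connect
        path += step[1:]
    return path

print(pra((0, 0), (9, 5)))
```

On this toy map the interleaved plan comes out near optimal; the parameter `k` controls how much of the abstract path is refined per planning step, trading plan quality against responsiveness, which is the trade-off the paper evaluates on real RTS maps.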