In the Real-Time Agent-Centered Search (RTACS) problem, an agent must reach a goal location while acting and reasoning in the physical world. Traditionally, RTACS problems are solved by propagating and updating heuristic values of the states visited by the agent. In existing RTACS algorithms the agent may revisit each state many times, causing the entire procedure to be quadratic in the size of the state space. We study the Iterative Deepening (ID) approach for solving RTACS and introduce Exponential Deepening A* (EDA*), an RTACS algorithm in which the threshold between successive depth-first calls is increased exponentially. EDA* is proven to have a worst-case bound that is linear in the state space. Experimental results supporting this bound are presented and demonstrate up to a 10x reduction over existing RTACS solvers with respect to distance traveled, states expanded, and CPU runtime.
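The core idea above, doubling the threshold between successive depth-first iterations rather than increasing it by a constant as in classic IDA*, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the graph representation, heuristic, and function names here are assumptions, and the real EDA* additionally interleaves search with agent movement.

```python
# Illustrative sketch of exponential deepening (EDA*-style threshold growth).
# The graph is a dict mapping a node to a list of (neighbor, edge_cost) pairs;
# h is an admissible heuristic. All names here are hypothetical.

def depth_limited_dfs(graph, node, goal, h, g, threshold, visited):
    """Return True if the goal is reachable with f = g + h(node) <= threshold."""
    if g + h(node) > threshold:
        return False  # prune: cost estimate exceeds the current threshold
    if node == goal:
        return True
    visited.add(node)
    for nbr, cost in graph.get(node, []):
        if nbr not in visited:
            if depth_limited_dfs(graph, nbr, goal, h, g + cost, threshold, visited):
                return True
    visited.discard(node)
    return False

def exponential_deepening_search(graph, start, goal, h, initial_threshold=1):
    """Repeat depth-first search, doubling the threshold each iteration.

    Doubling (rather than incrementing) means only O(log C*) iterations are
    needed, which is what keeps the total work linear in the state space.
    """
    threshold = initial_threshold
    while True:
        if depth_limited_dfs(graph, start, goal, h, 0, threshold, set()):
            return threshold  # goal found within this threshold
        threshold *= 2  # exponential growth: the key difference from IDA*
```

The contrast with standard IDA* is the last line: IDA* raises the threshold to the smallest f-value that exceeded it, which can trigger many near-identical iterations, while doubling bounds the number of iterations logarithmically.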