Assistive robots are used by individuals with medical disabilities to help with tasks such as movement. A subset of these individuals are patients with locked-in syndrome; these patients cannot communicate with a robot through traditional means, such as a joystick. This work designs a navigation scheme that allows an assistive robot to be controlled by patients suffering from locked-in syndrome, thus allowing the patient to move about their environment. Navigation is accomplished using an algorithm that combines autonomous robot movement with commands communicated by the patient. To bridge the communication gap between the patient and the robot, naturally occurring error-related potentials (ERPs) are used. These ERPs can establish communication between the patient and robot without relying on the patient interacting with physical stimuli, such as a keyboard or joystick. The command communicated to the robot is binary: correct or incorrect, in response to the movement of the robot at an intersection in a structured building. While more complicated commands, such as directional movement, can be classified from ERPs, this simple command allows for fast, reliable classification and response. To compensate for the limited complexity of patient commands, the robot is leveraged to handle tasks such as wall avoidance, while the navigation algorithm is designed to minimize the inputs required from the user when taking a commonly traveled path. The benefits of using a semi-controlled robot for navigation versus a fully autonomous robot are compared in terms of the time taken to discover and navigate an initial path to a destination. This work serves as a proof of concept for the proposed semi-autonomous navigation scheme, validating future work on the proposed design.