This research focuses on an interface between a user with a visual disability and the objects in the surrounding environment. The interface provides vision-like capability through an object detection workflow, with a grabbed-object protocol added as a further feature. By combining these two collaborating workflows, the interface offers a real-time solution for users with visual disabilities, and each part of the interface converts its results into user-friendly feedback. Object detection uses the faster_rcnn_inception_v2_coco_2018_01_28 model, which is pre-trained on Microsoft's COCO dataset. To describe objects in the outside environment, a text-to-speech program translates the COCO class labels, while a 3D mono sound protocol generates frequency-variable 3D sound cues that help the user understand the objects around them. To make these 3D mono sound waves audible, the digital signal is converted to analog using the Pulse Width Modulation (PWM) technique. Each signal is validated by a protocol digital variable generator that identifies on which side of the system the object was detected.
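The detection-to-feedback step described above could be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the one-third/two-thirds frame split, and the normalized `(ymin, xmin, ymax, xmax)` box format (the convention used by the TensorFlow Object Detection API that ships this Faster R-CNN model) are all assumptions.

```python
# Hypothetical sketch: turn one detection from the Faster R-CNN model into
# user-friendly feedback. Box coordinates are assumed normalized to [0, 1]
# in (ymin, xmin, ymax, xmax) order, as in the TensorFlow Object Detection API.

def object_side(box, left_edge=1 / 3, right_edge=2 / 3):
    """Return which side of the camera frame a detected object occupies."""
    ymin, xmin, ymax, xmax = box
    x_center = (xmin + xmax) / 2.0
    if x_center < left_edge:
        return "left"
    if x_center > right_edge:
        return "right"
    return "center"

def feedback_phrase(label, box):
    """Build the phrase that would be handed to the text-to-speech engine."""
    return f"{label} on the {object_side(box)}"

print(feedback_phrase("chair", (0.2, 0.05, 0.8, 0.30)))  # chair on the left
```

A real pipeline would feed `feedback_phrase` with the label and box of each detection above a confidence threshold, then pass the phrase to the text-to-speech converter.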
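The frequency-variable 3D mono sound selection might look like the sketch below, where each side of the frame is mapped to its own pitch so the user can distinguish directions by ear. The specific frequencies, duration, and sample rate are illustrative assumptions, not values from the paper.

```python
import math

# Assumed mapping from detected side to cue frequency (Hz); the actual
# frequency table used by the interface is not specified in the text.
SIDE_FREQ_HZ = {"left": 440.0, "center": 660.0, "right": 880.0}

def tone_samples(side, duration_s=0.25, sample_rate=8000):
    """Generate one mono sine-wave cue for the given side; amplitude in [-1, 1]."""
    freq = SIDE_FREQ_HZ[side]
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

cue = tone_samples("left")
print(len(cue))  # 2000 samples for a 0.25 s cue at 8 kHz
```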
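The digital-to-analog conversion via Pulse Width Modulation can be illustrated as follows: each audio sample in [-1, 1] is mapped to a duty cycle, and the duty cycle is emitted as a fixed-length pulse train whose high time is proportional to the sample value. A hardware low-pass filter then recovers the analog waveform. The tick resolution and helper names are assumptions for this sketch.

```python
# Sketch of PWM digital-to-analog conversion (assumed 8-tick resolution).

def sample_to_duty(sample):
    """Map an audio sample in [-1, 1] to a PWM duty cycle in [0, 1]."""
    clipped = max(-1.0, min(1.0, sample))
    return (clipped + 1.0) / 2.0

def pwm_period(sample, resolution=8):
    """Encode one sample as a pulse train of `resolution` ticks (1=high, 0=low)."""
    high_ticks = round(sample_to_duty(sample) * resolution)
    return [1] * high_ticks + [0] * (resolution - high_ticks)

print(pwm_period(0.0))  # [1, 1, 1, 1, 0, 0, 0, 0] — 50% duty cycle for silence
```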