# AnySurface API DRAFT

## Glossary

Things that still need names:

* What is currently referred to as a `match`. This is an important object in the system and deserves a good name.

## Configuration

Description of which configuration options are necessary and important:

* HTML target
* imageLoader
* Assignment sections

TODO: fill out the configuration object.

```javascript
const config = {
  canvasID: "canvas",
};

const mySurface = new AnySurface(config);
```

## Initialization

How to instantiate and start interacting with AnySurface objects.

The initialization process:

* Camera selection
* Camera alignment
* Calibration
* Background
* Assignment

Event-driven callback registration system of some kind:

```javascript
mySurface.on(AnySurface.CalibrateComplete, () => {
  // do the work
});

mySurface.on(AnySurface.init, () => {
  model.start();
});

mySurface.init();
```

## Assignment

Generation of surface elements that will act as computational objects or triggers. The user creates sections; each section delineates a "training zone" for object recognition.

```javascript
mySurface.setSections(sections);
mySurface.previewAssignment(); // causes the projection to show the assignment grid
```

## Detection

Returns a list of "matches" detected by AnySurface.

Description of the data structure returned for a match:

```javascript
{
  imageBlob: {...},
  trainingBlob: {
    optionalQrCode,
    image, // RGB
    mask,
  },
  position: {...}, // coords in both projector space and canvas space
  QR: {...},       // metadata about the object
}
```

```javascript
mySurface.detect(elements => {
  // interact with matches, perhaps to update the model
});
```

### Monitoring

Basic event notification for when AnySurface detects that the objects in the projection have changed. This runs the detection routine.

## Positioning

API for mapping camera coordinates to screen coordinates:

```javascript
const screenXY = mySurface.getCanvasPositionFromCamera(camX, camY);
```

Do we also need an API for mapping screen coordinates back to camera coordinates?
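A minimal sketch of what the positioning mapping could do internally: apply a 3x3 homography to move between camera space and canvas space. The function name and matrix values here are illustrative assumptions; a real implementation would estimate the homography during the camera-alignment and calibration steps.

```javascript
// Map a point through a 3x3 homography H, given in row-major order.
// H would be estimated during camera alignment / calibration.
function applyHomography(H, x, y) {
  const w = H[6] * x + H[7] * y + H[8];
  return {
    x: (H[0] * x + H[1] * y + H[2]) / w,
    y: (H[3] * x + H[4] * y + H[5]) / w,
  };
}

// Illustrative case: a 640x480 camera frame projected onto a 1280x960 canvas.
const H = [2, 0, 0, 0, 2, 0, 0, 0, 1];
const screenXY = applyHomography(H, 320, 240); // { x: 640, y: 480 }

// The reverse mapping (canvas -> camera) is the same operation with the
// inverse of H, so one matrix could back both directions of the API.
const Hinv = [0.5, 0, 0, 0, 0.5, 0, 0, 0, 1];
const camXY = applyHomography(Hinv, screenXY.x, screenXY.y); // { x: 320, y: 240 }
```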
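The "callback registration system of some kind" in the Initialization section could be backed by a small emitter like the one below. This is a sketch under assumptions: only `on` appears in the draft, so the class name, the `emit` method, and the chaining behavior are all hypothetical.

```javascript
// Minimal event emitter sketch. AnySurface could extend a class like this
// and emit named lifecycle events, e.g. emit(AnySurface.CalibrateComplete)
// once calibration finishes.
class Emitter {
  constructor() {
    this.listeners = new Map(); // event name -> array of callbacks
  }
  on(event, fn) {
    if (!this.listeners.has(event)) this.listeners.set(event, []);
    this.listeners.get(event).push(fn);
    return this; // allow chained registration
  }
  emit(event, ...args) {
    for (const fn of this.listeners.get(event) || []) fn(...args);
  }
}
```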