Special Guest: Anouk Wipprecht

[Image: Spider Dress]

What does fashion lack? “Microcontrollers,” according to hi-tech fashion designer and innovator Anouk Wipprecht. Anouk works in the emerging field of “fashion-tech,” a rare combination of fashion design with engineering, science, and interaction/user experience design. Producing an impressive body of tech-enhanced designs that bring fashion and technology together in unusual ways, she creates technological couture. With systems around the body that tend towards artificial intelligence, projected as ‘host’ systems on the human body, her designs move, breathe, and react to the environment around them. Strangely ahead of her time, Wipprecht combines the latest in science and technology to make fashion an experience that transcends mere appearance. She researches how we can interface in new ways with the world around us through our wardrobe.


Special Guest: MakeFashion


HACKING THE RUNWAY: WEARABLE TECHNOLOGY MEETS HIGH FASHION.

Launched in June 2012 by a trio of Calgarians, MakeFashion has produced over 60 wearable tech garments and showcased at over 40 international events. We introduce fashion designers and artists to the exciting world of wearables through a series of informative, hands-on, designer-led workshops. Our annual gala in Calgary, Canada, debuts new collections of innovative fashion technology combined with theatre and performance. MakeFashion has showcased unique projects at fashion and maker events around the world, including New York, Rome, and Shenzhen.


Special Guests: Jane Tingley, Cindy Poremba and Marius Kintel

anyWare will be a distributed sculpture comprising three physical objects placed in three different physical locations. These objects will be individually connected to the Internet, where they will also have a virtual presence. The virtual objects will be accessible through a web interface that provides an online overview of the three installation spaces and a platform for interaction. The objects – both virtual and physical – will mirror each other (either symmetrically or asymmetrically) and will simultaneously respond to people who interact with them in any physical or virtual location. For example, if someone walks into the gallery in Montreal and interacts with the object, the sensors on that object will transmit the information to the server hosted in the cloud, which will house the logic for the interaction. The server will then transmit a response to the stimuli to all three objects. This way, all three objects will move in an identical manner.
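As a rough illustration of the hub-and-broadcast pattern described above (not the project's actual implementation), the sketch below simulates a cloud server that receives a sensor event from any one object and pushes the same response to every connected object, physical or virtual. All names here (CloudHub, SculptureNode, and so on) are hypothetical.

```python
# Hypothetical sketch of the anyWare hub-and-broadcast interaction model.
# Names and behaviour are illustrative assumptions, not the project's code.


class SculptureNode:
    """One physical or virtual object in a given location."""

    def __init__(self, location: str):
        self.location = location

    def sense(self, stimulus: str) -> dict:
        # A visitor interaction produces a sensor event.
        return {"origin": self.location, "stimulus": stimulus}

    def actuate(self, response: dict) -> None:
        # Every node applies the same response, so all objects mirror each other.
        print(f"[{self.location}] moving: {response['action']} "
              f"(triggered from {response['origin']})")


class CloudHub:
    """Cloud-hosted server that houses the interaction logic."""

    def __init__(self):
        self.nodes: list[SculptureNode] = []

    def register(self, node: SculptureNode) -> None:
        self.nodes.append(node)

    def handle_event(self, event: dict) -> None:
        # The interaction logic lives here: map the stimulus to a response...
        response = {"action": f"ripple in reply to '{event['stimulus']}'",
                    "origin": event["origin"]}
        # ...then broadcast it to every object so all three move identically.
        for node in self.nodes:
            node.actuate(response)


if __name__ == "__main__":
    hub = CloudHub()
    montreal = SculptureNode("Montreal gallery")
    toronto = SculptureNode("Toronto gallery")
    web = SculptureNode("web interface")
    for n in (montreal, toronto, web):
        hub.register(n)

    # Someone walks into the Montreal gallery and touches the object.
    hub.handle_event(montreal.sense("touch"))
```

In this sketch the server is the only place interaction logic runs; the nodes simply report stimuli and apply whatever response comes back, which is what keeps the three objects in lockstep.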