
Human-in-the-Loop Design with Machine Learning

Published online by Cambridge University Press:  26 July 2019

Pan Wang*
Affiliation:
Imperial College London, United Kingdom
Danlin Peng
Affiliation:
Imperial College London, United Kingdom
Ling Li
Affiliation:
University of Kent, United Kingdom
Liuqing Chen
Affiliation:
Imperial College London, United Kingdom
Chao Wu
Affiliation:
Zhejiang University, China
Xiaoyi Wang
Affiliation:
Zhejiang University, China
Peter Childs
Affiliation:
Imperial College London, United Kingdom
Yike Guo
Affiliation:
Imperial College London, United Kingdom
*
Contact: Wang, Pan, Imperial College London, Dyson School of Design Engineering, United Kingdom, pan.wang15@imperial.ac.uk

Abstract


Deep learning methods have been applied to randomly generate images, for example in fashion and furniture design. To date, the human aspects that play a vital role in a design process have not been given significant attention in deep learning approaches. In this paper, results are reported from a human-in-the-loop design method in which brain EEG signals are used to capture preferred design features. In the framework developed, an encoder is first learned that extracts EEG features from raw signals recorded from subjects while viewing images from ImageNet. Secondly, a GAN model conditioned on the encoded EEG features is trained to generate design images. Thirdly, the trained model is used to generate design images from a person's EEG-measured brain activity during the cognitive process of thinking about a design. To verify the proposed method, a case study following the proposed approach is presented. The results indicate that the method can generate preferred design styles guided by preference-related brain signals. In addition, the method could help improve communication between designers and clients when clients are unable to express design requests clearly.
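To make the framework concrete, the following is a minimal sketch (not the authors' implementation) of the two components described above, written in Python with PyTorch. The LSTM-based encoder, the DCGAN-style generator, and all layer sizes are illustrative assumptions: the encoder maps a raw EEG window to a fixed-length feature vector, and the generator is conditioned on that vector by concatenating it with a noise code.

# Minimal sketch, assuming PyTorch; architecture choices (LSTM encoder,
# DCGAN-style generator) and layer sizes are illustrative, not the authors' code.
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """Map a raw EEG window (batch, time, channels) to a fixed-length feature vector."""
    def __init__(self, n_channels=128, hidden=256, feat_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, feat_dim)

    def forward(self, eeg):
        _, (h, _) = self.lstm(eeg)      # h: (1, batch, hidden), final hidden state
        return self.proj(h[-1])         # (batch, feat_dim)

class ConditionalGenerator(nn.Module):
    """Generate a 64x64 RGB image from a noise code concatenated with the EEG feature."""
    def __init__(self, noise_dim=100, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(noise_dim + feat_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, noise, eeg_feat):
        z = torch.cat([noise, eeg_feat], dim=1)          # condition by concatenation
        return self.net(z.unsqueeze(-1).unsqueeze(-1))   # (batch, 3, 64, 64)

# Usage: encode a (synthetic, stand-in) EEG recording, then condition generation on it.
encoder, generator = EEGEncoder(), ConditionalGenerator()
eeg = torch.randn(4, 440, 128)       # 4 recordings, 440 time steps, 128 channels
images = generator(torch.randn(4, 100), encoder(eeg))
print(images.shape)                  # torch.Size([4, 3, 64, 64])

In the paper's framework the generator is trained adversarially, conditioned on the encoded EEG features; the discriminator and training loop are omitted here for brevity.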

Type
Article
Creative Commons
CC BY-NC-ND 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s) 2019
