YONEZAWA, Tomoko
Faculty, Department/Institute
- Faculty of Informatics, Department of Informatics
Academic status (qualification)
- Professor (Apr. 1, 2017)
Undergraduate Degree / University
- Keio University, Faculty of Environmental Information, graduated 1999
Graduate Degrees / University
- Keio University, Master's Program, completed 2001
- Nagoya University, Doctoral Program, completed 2007
Academic Degrees
- Doctor of Information Science, Mar. 2007, Nagoya University
Homepage Address, E-mail Address
- Homepage Address: http://www.res.kutc.kansai-u.ac.jp/~yone/
Research fields
Research fields | Keyword |
---|---|
Perception information processing / Intelligent robotics | |
Human interface | |
Speech processing | |
User interface | |
Research topics
Research topic | |
---|---|
Study theme state | Individual Research |
Research duration | 2013 ~ 2016 |
Research program | Grant-in-Aid for Scientific Research |
Keyword | |
Research field | |
Research topics overview | |
Research topic | |
---|---|
Study theme state | Joint research within Japan |
Research duration | 2012 ~ 2014 |
Research program | Grant-in-Aid for Scientific Research |
Keyword | |
Research field | |
Research topics overview | |
Research topic | |
---|---|
Study theme state | Joint research within Japan |
Research duration | 2012 ~ 2012 |
Research program | Commissioned research (research entrusted to the researcher by an institution) |
Keyword | |
Research field | |
Research topics overview | |
Research topic | |
---|---|
Study theme state | Joint research within institution |
Research duration | 2012 ~ 2013 |
Research program | Other research |
Keyword | |
Research field | |
Research topics overview | |
Research topic | |
---|---|
Study theme state | Individual Research |
Research duration | 2011 ~ 2011 |
Research program | Other research |
Keyword | |
Research field | |
Research topics overview | |
Research topic | |
---|---|
Study theme state | Joint research within Japan |
Research duration | 2009 ~ 2011 |
Research program | Grant-in-Aid for Scientific Research |
Keyword | |
Research field | |
Research topics overview | |
Research topic | |
---|---|
Study theme state | Individual Research |
Research duration | 2008 ~ 2010 |
Research program | Grant-in-Aid for Scientific Research |
Keyword | |
Research field | |
Research topics overview | |
Research topic | |
---|---|
Study theme state | Other |
Research duration | 2008 ~ 2009 |
Research program | Other research |
Keyword | |
Research field | |
Research topics overview | |
Research topic | |
---|---|
Study theme state | Joint research within Japan |
Research duration | 2015 ~ 2017 |
Research program | Grant-in-Aid for Scientific Research |
Keyword | |
Research field | |
Research topics overview | |
Research Activities
- Born in Yokohama. Graduated from the Faculty of Environmental Information, Keio University, in 1999, and received a Master of Media and Governance from the Graduate School of Media and Governance, Keio University, in 2001. Joined NTT and worked at the Cyber Space Laboratories from 2001 to 2003. Seconded to ATR in 2003, where she worked at the Intelligent Robotics and Communication Laboratories from 2003 to 2011. Received the degree of Doctor of Information Science from Nagoya University in Mar. 2007. She is now a professor at Kansai University.
Research Career
- NTT Cyberspace Laboratories 2001/4/7 ~ 2003/6/30
- ATR Intelligent Robotics and Communication Labs. 2003/7/1 ~ 2011/3/31
Awards
- Human Interface Society 2011 Award of Japan (tech. meeting) Mar. 1,2011(Human Interface Society of Japan)
- Best Paper Award Sep. 26,2010(CASEMANS 2010)
- Best Paper Award May 11,2009(CASEMANS 2009)
- Impressive Experience Award Dec. 4,2009(HAI 2010)
- Impressive Experience Award Dec. 4,2008(HAI 2008)
- Finalist of Best Application Award Sep. 24,2008(IROS2008)
- Super creator Oct. 29,2009(IPA Exploratory IT Human Resources Project (MITOH Program))
- Interactive Presentation Award Mar. 3,2008(Interaction 2008)
- ATR International Incentive Award Mar. 17,2009(ATR)
- Apr. 2008(Nagoya University)
- SFC Student Award Mar. 2001(Keio University)
- Honerable Mentioned Paper Aug. 9,2013(iHAI2013)
- Outstanding Presentation Award Sep. 11,2014(Human Interface Symposium 2014)
- IVRC2013 Solidray Research Lab. Award (Team green lab) Oct. 2013(IVRC 2013 Organization)
- IVRC2013 Jury's Special Award (Team green lab) Oct. 2013(IVRC 2013 Organization)
- IVRC elimination exposition 3rd place (Team green lab) Sep. 2013(IVRC 2013 Organization)
- IVRC elimination exposition 1st place (Team Ninoude) Sep. 10,2015(IVRC 2015 Organization)
- IVRC elimination exposition 6th place (Team Aoi-chan) Sep. 10,2015(IVRC 2015 Organization)
- Impressive Poster Award Dec. 2015(HAI Symposium 2014)
- IVRC2015 Hacosco Award (Team Aoi-chan) Oct. 24,2015(IVRC 2015 Organization)
- IVRC2015 Meiwa-Denki President's Award (Team Aoi-chan) Oct. 24,2015(IVRC 2015 Organization)
- IVRC2015 Overall Victory Award (Team Ninoude) Oct. 24,2015(IVRC 2015 Organization)
- OGIS Lab. Jury's Special Award (Team Modality) Nov. 12,2015(OGIS Lab. Inc. (2015 software contest))
- Best Student Paper Award Oct. 6,2016(HAI 2016)
- Student excellent prensentation award Sep. 26,2016(IPSJ Kansai 2016)
- Student excellent prensentation award Sep. 26,2016(IPSJ Kansai 2016)
- Research Fellowship for Young Scientists(DC2) Apr. 1,2017(JSPS)
- IPSJ Yamashita SIG Research Award Mar. 16,2017(IPSJ)
- Outstanding Presentation Award Sep. 6,2017(Human Interface Symposium 2017)
- Student excellent prensentation award Sep. 25,2017(IPSJ Kansai 2017)
- Student encouraging prensentation award Sep. 25,2017(IPSJ Kansai 2017)
- Student encouraging award Aug. 20,2017(IPSJ UBI Tech Meeting)
- Research encouraging award Dec. 2,2017(IEICE ET Tech Meeting)
- MVE Award Jan. 18,2019(IEICE MVE Tech Meeting)
- Young Researcher Award Jul. 2018(DICOMO 2018)
- Outstanding Presentation Award Sep. 4,2019(Human Interface Symposium 2019)
- Encouraging presentation award Sep. 23,2019(IPSJ Kansai 2019)
- Human Communication Award Dec. 12,2019(IEICE HCS)
- Victory of OGIS-RI Software Challenge Award Nov. 13,2018(OGIS Lab. Inc. (2018 software contest))
- Student Encouragement Award Mar. 10,2021(HAI Symposium 2021 (Japanese Domestic Conference))
- Best Poster Award Nov. 2021(HAI 2021)
- Best Paper Award Nominee Sep. 2021(IVA2021)
- Dec. 2021
- Dec. 2021
- Dec. 2021
- Encouraging award Nov. 2021
- The 2nd place award Nov. 2021
Academic Associations
Academic society / organization | Position (term of office) |
---|---|
IPSJ | Apr. 1, 2012 ~ |
Human Interface Society of Japan | Apr. 1, 2012 ~ |
Virtual Reality Society of Japan | |
Acoustical Society of Japan | |
IEICE | Apr. 1, 2012 ~ (2017/) |
Intellectual Property Rights
- (Published) application number: 2001-228868
- (Acquired) patent number: 3566646
- (Published) application number: 2003-157100
- (Acquired) patent number: 3866171
- (Acquired) patent number: 3970193
- (Acquired) patent number: 3981640
- (Acquired) patent number: 4720974
- (Acquired) patent number: 4677543
- (Acquired) patent number: 4831750
- (Acquired) patent number: 5103682
- (Acquired) patent number: 5007405
- (Published) application number: 2009-199512
- (Acquired) patent number: 5092093
- (Published) application number: 2009-244949
- (Acquired) application number: 2010-122369, patent number: 5366043
- (Acquired) application number: 2011-095902, patent number: 5649809
- (Acquired) application number: 2011-097531, patent number: 5407069
- (Acquired) application number: 2011-115936, patent number: 5688574
- (Published) application number: 2011-237865
- (Published) application number: 2012-010856
- (Published) application number: 2012-078913
- (Published) application number: 2012-098111
Joint Projects/Commissioned Projects
2012 ~ 2012 Commissioned research from a private company
Research Publications
No. | Type of publication | Date of publication (Date of presentation) | Title | Type of research result | Jointly authored or single authored | Publisher and journal name | Volume number |
---|---|---|---|---|---|---|---|
1 | Papers1 | 2022~2022,00,00,,, | ユーザに対するロボットの生理的働きかけによるコンテンツ覚醒度の増幅と親近感への影響 | Academic Journal | Journal of Japan Society for Fuzzy Theory and Intelligent Informatics | Vol. 34, No. 3 pp. 579-591, | |
2 | International academic conference8 | 2021/12/9~2021/12/92021,12,09,2021,12,09 | Water-Human-Computer-Interface (WaterHCI):Crossing the Borders of Computation, Clothes, Skin, and Surface | Other | Co-author | 23rd annual WaterHCI DECONference 2021 | |
3 | International academic conference8 | 2021/11/9~2021/11/112021,11,09,2021,11,11 | Agent's Internal State Expression Related to Desire and Suppress Based on Behavior and Physiological Expression | Other | Co-author | HAI2021 | pp. 417–422 |
4 | International academic conference8 | 2021/11/9~2021/11/112021,11,09,2021,11,11 | Toward internal-state-based Parameterized Model of Robot’s Touching Manners based on Subjective Evaluation | Other | Co-author | HAI 2021 | pp. 438–442 |
5 | Papers1 | 2021/11~2021,11,00,,, | Quantitative effects on multiple involuntary physiologic expressions that convey the fear of robots | Academic Journal | Co-author | Journal of Japan Society for Fuzzy Theory and Intelligent Informatics | Vol.33, No.4, pp.501--515,, |
6 | International academic conference8 | 2021/10/3~2021/10/72021,10,03,2021,10,07 | Community Interaction Optimization on Twitter for people with Mood Disorders | Other | Co-author | SOTICS 2021 (The Eleventh International Conference on Social Media Technologies, Communication, and Informatics) | ArticleNo. sotics_2021_1_10_60015, ISSN: 2326-9294, ISBN: 978-1-61208-899-0, pp.1-6, |
7 | International academic conference8 | 2021/9/14~2021/9/172021,09,14,2021,09,17 | Attention-Guidance Method Based on Conforming Behavior of Multiple Virtual Agents for Pedestrians | Other | Co-author | IVA 2021 | to appear |
8 | International academic conference8 | 2021/7/24~2021/7/292021,07,24,2021,07,29 | Optimal community-generation methods for acquiring extensive knowledge on Twitter | Other | Co-author | HCII 2021 | Social Computing and Social Media: Experience Design and Social Network Analysis, pp. 105-120 |
9 | International academic conference8 | 2021/7/24~2021/7/292021,07,24,2021,07,29 | Elderly sleep support agent using physical contact presence by visual and tactile presentation | Other | Co-author | HCII 2021 | Human Aspects of IT for the Aged Population. Supporting Everyday Life Activities, pp.348-362 |
10 | Papers1 | 2021/5~2021,05,00,,, | The Effect of Interactive News Reading Method for A Newscaster Agent on Trust and Closeness | Academic Journal | Co-author | HISJ Journal | Vol.23 No.2, pp.165-176 |
11 | International academic conference8 | 2020/12/2~2020/12/42020,12,02,2020,12,04 | AR avatar separated from lecturer for individual communication in one-to-many communication | Other | Co-author | ICAT-EGVE 2020 | DOI: 10.2312/egve.20201276, pp. 15-16, |
12 | International academic conference8 | 2020/10~2020,10,00,,, | Stimulation of Learning Motivation by Multiple Agents in Group Training | Other | Co-author | HAI 2020 | DOI: https://doi.org/10.1145/3406499.3415069, pp.25--31 |
13 | Papers1 | 2020/8~2020,08,00,,, | Instinctive expressions through involuntary representation on robot's haptic skin | Academic Journal | Co-author | HISJ Journal | Vol.22, No.3, p235-250 |
14 | International academic conference8 | 2020/7~2020,07,00,,, | Partner Agent Showing Continuous and Preceding Daily Activities for Users Behavior Modification | Other | Co-author | HCII 2020 | |
15 | International academic conference8 | 2020/7~2020,07,00,,, | Basic Study of Wall-projected Humanitude Agent for Pre-care Multimodal Interaction | Other | Co-author | HCII 2020 | |
16 | International academic conference8 | 2020/7~2020,07,00,,, | Analysis of Effects on Postural Stability by Wearable Tactile Expression Mechanism | Other | Co-author | HCII 2020 | |
17 | Papers1 | 2020~2020,00,00,,, | Effectiveness of acoustic AR-TA agent using localized footsteps corresponding to audience members' attitudes | Academic Journal | Co-author | Int. J. of Simulation and Process Modeling. Inderscience. | Special Issue on “Virtual and Augmented Reality in Industry & Logistics,” Vol.15 No.6 |
18 | Commentary9 | 2019/11~2019,11,00,,, | Anthropomorphic Intermediation of Robots and Agents for Communication Assistance | Academic Journal | Single-Author | ||
19 | International academic conference8 | 2019/10~2019,10,00,,, | Emotional Gripping Expression of a Robotic Hand as Physical Contact | Other | Co-author | HAI 2019 | pp. 37–42, DOI: https://doi.org/10.1145/3349537.3351884 |
20 | International academic conference8 | 2019/10~2019,10,00,,, | Agent's Internal State Expression by Combining Its Desiring Behaviors and Heartbeat | Other | Co-author | HAI2019 poster | pp. 226–228, DOI: https://doi.org/10.1145/3349537.3352773 |
21 | Academic presentation7 | 2019/8~2019,08,00,,, | Effect of The Mult-point Vibtation and Position to The Sole and Instep of Foot on Vibration Perception. | Other | Co-author | IEICE MVE Tech meeting | vol. 119, no. 190, MVE2019-19, pp. 73-78, |
22 | International academic conference8 | 2019/3/23~2019,03,23,,, | Preliminary Experiment on Shareable and Portable Voice Sticky using Sound Orientation | Other | Co-author | IEEE VR NeuroVirt WS | DOI: 10.1109/VR.2019.8798302 |
23 | International academic conference8 | 2019/3/23~2019,03,23,,, | Japanese Tea Ceremony Experience with Multimodal AR Expressing Mental Concentration | Other | Co-author | IEEE VR NeuroVirt WS | DOI: 10.1109/VR.2019.8797964 |
24 | Academic presentation7 | 2019/1/26~2019/1/262019,01,26,2019,01,26 | Effectiveness of switching target of speech using two kind microphones in one-to-many communication | Other | Co-author | IEICE ET Tech meeting | vol. 118, no. 427, ET2018-78, pp. 7-12, |
25 | Academic presentation7 | 2019/1/17~2019/1/182019,01,17,2019,01,18 | AR shadow-clone agent of lecturer for promoting individual communication | Other | Co-author | IEICE MVE Tech meeting | PRMU2018-105, pp. 107 - 115 |
26 | Academic presentation7 | 2019/1/17~2019/1/182019,01,17,2019,01,18 | Supporting co-eating communication with switching desk-around AR environment using translucent partitions | Other | Co-author | IEICE MVE Tech meeting | PRMU2018-104, pp. 101 - 107 |
27 | International academic conference8 | 2018/12/15~2018/12/182018,12,15,2018,12,18 | Analyses of Textile Pressure-map Sensor Data of a Stuffed Toy for Understanding Human Emotional Physical Contact | Other | Co-author | Human Agent Interaction 2018 | pp.191--198 |
28 | International academic conference8 | 2018/12/15~2018/12/182018,12,15,2018,12,18 | Attracting Attention and Changing Behavior toward Wall Advertisements with a Walking Virtual Agent | Other | Co-author | Human Agent Interaction 2018 | pp.61--66 |
29 | International academic conference8 | 2018/12/15~2018/12/182018,12,15,2018,12,18 | Arousal and Valence in Robot's Emotional Expression of Breathing and Heartbeat | Other | Co-author | Human Agent Interaction 2018 (HAI2018) | pp.330--332, |
30 | International academic conference8 | 2018/12/15~2018/12/182018,12,15,2018,12,18 | Internal Flow Model and Behavioral Design for an Artificial Agent's Ownership-desire Model | Other | Co-author | Human Agent Interaction 2018 (HAI2018) | pp.362--364, |
31 | International academic conference8 | 2018/12/15~2018/12/182018,12,15,2018,12,18 | Preliminary Examination of Walking Stability Improvement by Wearable Tactile Expression Mechanism | Other | Co-author | Human Agent Interaction 2018 (HAI2018) | pp.350--352, |
32 | International academic conference8 | 2018/12/5~2018/12/82018,12,05,2018,12,08 | Verification of Discussion-Stimulating System for Online Creative Meetings using Key Phrases and Mind Maps | Other | Co-author | SCIS-ISIS 2018 | pp. 967--974 |
33 | International academic conference8 | 2018/10/22~2018/10/242018,10,22,2018,10,24 | Switching Target of Speech between Whole and Particular Audiences using Face Direction and Two Microphones | Other | Co-author | UAC2018 (International Symposium on Universal Acoustical Communication 2018) | poster 2-26 (2 pages) |
34 | International academic conference8 | 2018/9/17~2018/9/212018,09,17,2018,09,21 | Acoustic AR-TA Agent using Footsteps in Corresponding to\Audience Members' Participating Attitudes | Other | Co-author | VARE2018 | pp. 113-122 |
35 | Academic presentation7 | 2018/8/26~2018/8/272018,08,26,2018,08,27 | Preliminary Design of Internal and Acting Models of Agent’s Ownership Desire | Other | Co-author | IEICE HCS | HCS2018-32, pp.1--6, |
36 | Academic presentation7 | 2018/8/26~2018/8/272018,08,26,2018,08,27 | Arousal and Valence in Robot's Emotional Expression by Artificial Physiological Phenomena | Other | Co-author | IEICE HCS | HCS2018-33, pp.7--12, |
37 | Commentary9 | 2018/7~2018,07,00,,, | Part1, V. Human-agent Interaction | Other | Co-authored chapter | NDL | Perspectives on Artificial Intelligence/Robotics and Work/Employment |
38 | Papers1 | 2018/6~2018,06,00,,, | Lecturer’s Understanding in Large Classroom by Overlapped Color Map Based on Estimation of Audience’s Attitudes | Academic Journal | Co-author | IEICE Japanese Journal (Society D), | Vol.J101-D,No.6,pp.944-957 |
39 | Papers1 | 2018/6~2018,06,00,,, | AR Projection System Enhancing Visual Guidance Effect of the Pointing Gesture of Audience in Lecture | Academic Journal | Co-author | IEICE Japanese Journal (Society D), | Vol.J101-D,No.6,pp.932-943 |
40 | Academic presentation7 | 2018/3/27~2018,03,27,,, | Virtual Convex Segment by Vibration Actuator Array Revised by Different Sensitivities in a Sole | Other | Co-author | 153rd HI Tech-meeting (SIG-ACI-21) | SIG-ACI-21, pp.43--50, |
41 | Academic presentation7 | 2018/3/27~2018,03,27,,, | Preceding movement of humanitude virtual agent into user's FOV before conversation supporting dementia elderly people | Other | Co-author | 153rd HI Tech-meeting (SIG-ACI-21) | SIG-ACI-21, pp.17--22, |
42 | Academic presentation7 | 2018/3/27~2018,03,27,,, | AR Shadow-clone agent system for speaker in one-to-many communication | Other | Co-author | 153rd HI Tech-meeting (SIG-ACI-21) | SIG-ACI-21, pp.37--42, |
43 | Academic presentation7 | 2018/3/27~2018,03,27,,, | Switching multiple desks environments using semitransparent partition screen and sound localization | Other | Co-author | 153rd HI Tech-meeting (SIG-ACI-21) | SIG-ACI-21, pp.23--28, |
44 | Commentary9 | 2018/3~2018,03,00,,, | Part1, V. Human-agent Interaction | Other | Co-authored chapter | NDL | Perspectives on Artificial Intelligence/Robotics and Work/Employment |
45 | International academic conference8 | 2017/12/12~2017/12/142017,12,12,2017,12,14 | Enhancing pointing gestures using an automatic projection system | Other | Co-author | ACIS2017 | pp. 161-164 |
46 | International academic conference8 | 2017/12/12~2017/12/142017,12,12,2017,12,14 | Lecture support system for understanding an audience's attitudes using optical flow and overlapped color mapping | Other | Co-author | ACIS2017 | pp. 145-148 |
47 | Academic presentation7 | 2017/12/2~2017,12,02,,, | Analyses of Creative Discussion Stimulating System using Key Phrases and Mind-map based on Online E-Papers | Other | Co-author | IEICE ET Tech meeting | Vol.117, No.335, ET2017-72, pp 21--26, |
48 | Academic presentation7 | 2017/12/2~2017,12,02,,, | Effects of Footstep AR Agent as TA for Each Different Individuals in Multiple Audiences | Other | Co-author | IEICE ET Tech meeting | Vol.117, No.335, ET2017-73, pp 27--32, |
49 | International academic conference8 | 2017/10/17~2017/10/202017,10,17,2017,10,20 | Indirect control of user's e-learning motivation by controlling activity ratio of multiple agents | Other | Co-author | HAI2017 | pp 27--34, |
50 | International academic conference8 | 2017/10/17~2017/10/202017,10,17,2017,10,20 | Physiological Expression of Robots Enhancing Users' Emotion in Direct and Indirect Communication | Other | Co-author | HAI2017 poster | pp 505--509, |
51 | Academic presentation7 | 2017/8/20~2017/8/212017,08,20,2017,08,21 | Pressure map data analyses of textile-type sensor for classification of physical contact pattern on stuffed toy | Other | Co-author | IEICE VNV-HCS | vol.2017-HCS-117, vol. 117, no. 177, HCS2017-53, pp. 35--40, |
52 | Lecture19 | 2017/8/8~2018/8/92017,08,08,2018,08,09 | Voisticky: Sharable and Portable Auditory Balloon with Voice Sticky Posted and Browsed by User's Head Direction | Other | Co-author | The 1st International Scientific Conference on Hospitality and its Applications (ISCHA) | |
53 | International academic conference8 | 2017/7/9~2017/7/142017,07,09,2017,07,14 | Haptic interaction design for physical contact between a wearable robot and the user | Other | Co-author | HCII2017, Springer International Publishing Switzerland | pp.476--490 |
54 | International academic conference8 | 2017/7/9~2017/7/142017,07,09,2017,07,14 | A tactile expression mechanism using pneumatic actuator array for noti cation from wearable robots | Other | Co-author | HCII2017, Springer International Publishing Switzerland | pp.466--475 |
55 | International academic conference8 | 2017/6/28~2017/7/12017,06,28,2017,07,01 | Estimating Emotion of User via Communicative Stuffed-toy Device with Pressure Sensors Using Fuzzy Reasoning | Other | Co-author | URAI2017 | P2-79 |
56 | Academic presentation7 | 2016/12/12~2016,12,12,,, | Emotion Estimation Using Pressure Sensors in Communicative Stuffed-toy Device with Fuzzy Reasoning | Other | Co-author | IPSJ MPS | 2016-MPS-111, no. 6, pp.1--6 |
57 | International academic conference8 | 2016/12/5~2016/12/82016,12,05,2016,12,08 | Groveling on the Wall: Interactive VR Attraction using Gravity Illusion SIGGRAPH ASIA 2016 poster (2 pages) 2016 | Other | Co-author | SIGGRAPH ASIA 2016 | poster (2pages) |
58 | International academic conference8 | 2016/12/5~2016/12/82016,12,05,2016,12,08 | Virtual Ski Jump: illusion of slide down the slope and gliding | Other | Co-author | SIGGRAPH ASIA 2016 | poster (2pages) |
59 | International academic conference8 | 2016/10/7~2016,10,07,,, | Integrating auditory space for multiple people in real world using their personal devices | Other | Co-author | UV 2016 | II-2-5, 5 pages |
60 | International academic conference8 | 2016/10/4~2016,10,04,,, | Stepwise Experience Design of Tactile Interaction in Children's Enrobotment | Other | Single-Author | HAI 2016 WS | Enrobotment WS, 3 pages |
61 | International academic conference8 | 2016/10~2016,10,00,,, | Evaluation of Schedule Managing Agent among Multiple Members with Representation of Background Negotiations | Other | Co-author | HAI 2016 | pp.305--313 |
62 | International academic conference8 | 2016/10~2016,10,00,,, | Investigating Breathing Expression of a Stuffed-Toy Robot Based on Body-Emotion Model | Other | Co-author | HAI 2016 | pp.139--145 |
63 | International academic conference8 | 2016/9/28~2016/9/302016,09,28,2016,09,30 | Accelerating Physical Experience of Immersive and Penetrating Music by Vibration-motor Array in a Wearable Belt Set | Other | Co-author | IFIP ICEC 2016 | Springer LNCS 9926, pp.173--187 |
64 | International academic conference8 | 2016/9/6~2016,09,06,,, | Seamless Change of Modality Volume in Observation of Elderly Daily Lives | Other | Co-author | ICServn2016 | pp.73--80 |
65 | Academic presentation7 | 2016/3/5~2016,03,05,,, | Supporting speaker's understanding using color-overlapped image based on estimation of audience participation | Other | Co-author | IEICE Tech. Meeting (ET) | ET2015--117, pp.129--136 |
66 | Academic presentation7 | 2016/1/28~2016/1/292016,01,28,2016,01,29 | Auditory localization in closed space by synchronization algorithm of multiple portable devices | Other | Co-author | IEICE Tech Meeting (EA) | vol.115, no.424, EA2015-58, pp. 19--26, |
67 | International academic conference8 | 2015/12/3~2015/12/52015,12,03,2015,12,05 | Wearable robot that measures user vital signs for elderly care and support | Other | Co-author | 9th EAI International Conference on Bio-inspired Information and Communications Technologies | Pages 53-57 |
68 | International academic conference8 | 2015/12/3~2015/12/52015,12,03,2015,12,05 | Design of Pet Robots with Limitations of Lives and Inherited Characteristics | Other | Co-author | 9th EAI International Conference on Bio-inspired Information and Communications Technologies | Pages 69-73 |
69 | International academic conference8 | 2015/12/3~2015/12/52015,12,03,2015,12,05 | Breathing Expression for Intimate Communication Corresponding to the Physical Distance and Contact between Human and Robot | Other | Co-author | 9th EAI International Conference on Bio-inspired Information and Communications Technologies | Pages 65-69 |
70 | International academic conference8 | 2015/10/28~2015/10/302015,10,28,2015,10,30 | Evaluations of Involuntary Crossmodel Expressions on the Skin of a Communication Robot | Other | Co-author | Ubiquitous Robots and Ambient Intelligence 2015 | TC4-4, pp. 347--352 |
71 | International academic conference8 | 2015/10/28~2015/10/302015,10,28,2015,10,30 | Direction indication mechanism by pulling user's cloth for wearable message robot | Other | Co-author | ICAT-EGVE 2015 | P2 (4 pages) |
72 | International academic conference8 | 2015/10/21~2015/10/242015,10,21,2015,10,24 | Spatial Communication and Recognition in Human-agent Interaction using the Motion Parallax-based 3DCG Virtual Agent | Other | Co-author | Human Agent Interaction 2015 | pp.97--103 |
73 | International academic conference8 | 2015/9/2~2015,09,02,,, | Wearable robot with vital sensors for elderly care and support | Other | Co-author | ROMAN 2015 Interactive Session | IS-12 |
74 | International academic conference8 | 2015/9/1~2015,09,01,,, | Crossmodal Combination among Verbal, Facial, and Flexion Expression for Anthropomorphic Acceptability | Other | Co-author | ROMAN 2015 | pp.549-554 |
75 | Academic presentation7 | 2015/8/21~2015,08,21,,, | Support for Building User's Mindmap by Conversational Agent Moving Around Nodes | Other | Co-author | IEICE Tech Meeting HCS | pp.1--5 |
76 | Academic presentation7 | 2015/8/21~2015,08,21,,, | Design of Virtual Agent Estimating Multiple Persons' Possession of Objects | Other | Co-author | IEICE Tech Meeting HCS | pp.7-12 |
77 | Academic presentation7 | 2015/8/21~2015,08,21,,, | Management and negotiation agent showing nonverbal behaviors among other members' presence | Other | Co-author | IEICE Tech Meeting HCS | pp.13-18 |
78 | International academic conference8 | 2015/8/5~2015,08,05,,, | Indirect Monitoring of Cared Person by Onomatopoeic Text of Environmental Sound and User's Physical State | Other | Co-author | HCII2015, Springer International Publishing Switzerland, N. Streitz and P. Markopoulos (Eds.): | DAPI 2015, LNCS 9189, pp.506-517 |
79 | International academic conference8 | 2015/8/5~2015,08,05,,, | Auditory browsing interface of ambient and parallel sound expression for supporting one-to-many communication | Other | Single-Author | HCII2015, Springer International Publishing Switzerland, N. Streitz and P. Markopoulos (Eds.): | DAPI 2015, LNCS 9189, pp.224-236 |
80 | International academic conference8 | 2015/7/28~2015,07,28,,, | Evaluating Elements of Communicative Stuffed-toy Device Describes Scripts on SNS | Other | Co-author | PDPTA 2015 | pp.310-316 |
81 | Papers1 | 2015/5~2015,05,00,,, | Investigation of Embedded Text Communication with Onomatopoeia of User's Bodily Motion and Environmental Sounds | Academic Journal | Co-author | HISJ Journal | vol.17, no.2, pp 97--106 |
82 | Papers1 | 2015/1~2015,01,00,,, | Effectiveness of Ownership Expression for Real-world Objects by Facial Expression of Virtual Agent | Academic Journal | Co-author | IPSJ Journal | vol.56 no.1, pp 411--419 |
83 | International academic conference8 | 2014/12/3~2014,12,03,,, | Real-Time 3D Data Reduction and Reproduction of Spatial Model using Line Detection in RGB Image | Other | Co-author | SCIS-ISIS 2014 | pp.727-730 |
84 | International academic conference8 | 2014/12/3~2014,12,03,,, | Interactive Browsing Agent for Novice User with Selective Information in Dialog | Other | Co-author | SCIS-ISIS 2014 | pp.731-734 |
85 | International academic conference8 | 2014/12/3~2014,12,03,,, | Shedule Managing Agent among Group Members with Caring Expressions | Other | Co-author | SCIS-ISIS 2014 | pp.1564-1567 |
86 | International academic conference8 | 2014/12/3~2014,12,03,,, | Automatic Acquirement of Toilet map using Wearable Camera | Other | Co-author | SCIS-ISIS 2014 | pp.1568-1571 |
87 | International academic conference8 | 2014/12/3~2014,12,03,,, | An Interactive Stuffed-toy Device for Communicative Description on Twitter | Other | Co-author | SCIS-ISIS 2014 | pp.1361-1363 |
88 | International academic conference8 | 2014/11/11~2014,11,11,,, | Synchronized AR Environment for Multiple Users Using Animation Markers | Other | Co-author | VRST2014 | pp.237-238 |
89 | International academic conference8 | 2014/10/29~2014,10,29,,, | Personal and Interactive Newscaster Agent based on Estimation of User's Understanding | Other | Co-author | HAI2014 | pp.45--50 |
90 | International academic conference8 | 2014/10/29~2014,10,29,,, | Simplification of Wearable Message Robot with Physical Contact for Elderly's Outing Support | Other | Co-author | HAI2014 | pp.35--38 |
91 | Papers1 | 2014/10~2014,10,00,,, | Proposal and Evaluation of Toilet Timing Suggestion Methods for the Elderly | Academic Journal | Co-author | International Journal of Advanced Computer Science and Applications | Volume 5 Issue 10, pp.140--145 |
92 | International academic conference8 | 2014/6/25~2014,06,25,,, | A Structure of Wearable Message-robot for Ubiquitous and Pervasive Services | Other | Co-author | HCII2014, Springer International Publishing Switzerland, N. Streitz and P. Markopoulos (Eds.): | DAPI 2014, LNCS 8530, pp.400--411 |
93 | International academic conference | 2014/3/4 | Involuntary Expression of Embodied Robot Adopting Goose Bumps | Other | Co-author | HRI 2014 | pp.254--255 |
94 | International academic conference | 2014/3/4 | Breatter: A Simulation of Living Presence with Breath that Corresponds to Utterances | Other | Co-author | HRI 2014 | pp.256--257 |
95 | International academic conference | 2013/11/8 | Mixticky: a virtual multimedia sticky recordable/browsable around user using smart phone | Other | Co-author | ACPR2013 | pp.637--641 |
96 | International academic conference | 2013/11 | Physical Contact using Haptic and Gestural Expressions for Ubiquitous Partner Robot | Other | Co-author | IROS2013 | pp.5680-5685 |
97 | International academic conference | 2013/9 | Wearable partner agent with anthropomorphic physical contact with awareness of clothing and posture | Other | Co-author | ISWC2013 | pp.77--80 |
98 | International academic conference | 2013/8 | Abotar: An Expressive Method of Web Communication using Appearances of Avatars Attached to Text Messages and Remarks | Other | Co-author | iHAI2013 | II-p6 |
99 | International academic conference | 2013/8 | SCoViA: Effectiveness of spatial communicative virtual agent based on motion parallax | Other | Co-author | iHAI2013 | II-p7 |
100 | International academic conference | 2013/8 | Investigation of Object-indicating Behaviors -Between Spatial Difficulty and Robot's Degree of Freedom- | Other | Co-author | iHAI2013 | II-p4 |
101 | International academic conference | 2013/8 | Appearance and Physical Presence of Anthropomorphic Media in Parallel with Non-face-to-face Communication | Other | Co-author | iHAI2013 | III-I-3 |
102 | International academic conference | 2013/8 | Visual language communication system with multiple pictograms converted from weblog texts for authoring and browsing dance motion and formation | Other | Co-author | IASDR2013 | 13A-3 |
103 | International academic conference | 2013/8 | Choreographic design visualization of enormous dancers for authoring and browsing dance motion and formation | Other | Co-author | IASDR2013 | 05E-3 |
104 | International academic conference | 2013/3 | Attitude-aware communication behaviors of a partner robot: politeness for the master | Other | Co-author | HRI2013 demo | D19 |
105 | International academic conference | 2013/3 | Ikitomical Model: extended body sensation through a cardiovascular robot | Other | Co-author | HRI2013 demo | D20 |
106 | Papers | 2012 | Anthropomorphic awareness of partner robot to user's situation based on gaze and speech detection | Academic Journal | Co-author | International Journal of Autonomous and Adaptive Communications Systems | Vol. 5, No. 1, pp. 18-38 |
107 | International academic conference | 2012 | Real-Time Polygon Reconstruction for Digital archives of Cultural Properties | Other | Co-author | JSST2012 | OS9-12 (7 pages) |
108 | International academic conference | 2012 | Manipulation of a VR object using user's pre-motion | Other | Co-author | JSST2012 | OS9-13 (7 pages) |
109 | International academic conference | 2012 | AR based Spatial Reasoning Capacity Training for Students | Other | Co-author | PDPTA2012 | Vol.II, pp.751-757 |
110 | International academic conference | 2012 | Proposal and Evaluation of the Toilet Timing Suggestion Method for the Elderly | Other | Co-author | ICCI*CC 2012 | pp. 178-185 |
111 | International academic conference | 2011/9/18 | Estimation of User Conversational States based on Combination of User Actions and Feature Normalization | Other | Co-author | ACM the 6th Workshop of CASEMANS | pp.33-37 |
112 | International academic conference | 2011/9/18 | Privacy Protected Life-context-aware Alert by Simplified Sound Spectrogram from Microphone Sensor | Other | Co-author | ACM the 6th Workshop of CASEMANS | pp.4-9 |
113 | International academic conference | 2011/9/14~2011/9/16 | Voisticky: Sharable and Portable Auditory Balloon with Voice Sticky Posted and Browsed by User's Head | Other | Co-author | IEEE ICSPCC 2011 | pp. 118-123 |
114 | Lecture | 2011/9 | Ubiquitous generation: change of education for informatics | Other | Co-author | | |
115 | Papers | 2011/8 | Assisting video communication by an intermediating robot system corresponding to each user's attitude | Academic Journal | Co-author | Human Interface Society Journal | Vol.3 No.3 |
116 | Papers | 2011 | Automatic calibration of 3D eye model for single-camera based gaze estimation | Academic Journal | Co-author | IEICE Japanese Journal (Society D) | Vol.J94-D, No.6, pp.998-1006 |
117 | International academic conference | 2010/10/18 | Improving Video Communication for Elderly and Disabled by Coordination of Robot's Active Listening Behaviors and Media Controls | Other | Co-author | IEEE IROS 2010 | pp.1476-1481 |
118 | International academic conference | 2010/9/26 | Conversational Attitude-aware Behavioral Design for Robot Assistant Combined with Video Communication | Other | Co-author | The 5th ACM Workshop of CASEMANS | pp. 1-8 |
119 | Papers | 2009 | Verification of Behavioral Designs for Gaze-communicative Stuffed-toy Robot | Academic Journal | Co-author | IEICE Japanese Journal (Society D) | Vol.J92-D, No.1, pp.81-92 |
120 | International academic conference | 2009 | Portable Recording/Browsing System of Voice Memos Allocated to User-relative Directions | Other | Co-author | Pervasive 2009 Adjunct Proceedings | pp.241-244 |
121 | International academic conference | 2009 | Evaluating Crossmodal Awareness of Daily-partner Robot to User's Behaviors with Gaze and Utterance Detection | Other | Co-author | CASEMANS2009 | pp.1-8 |
122 | International academic conference | 2008 | Intuitive Page-turning Interface of E-books on Flexible E-paper based on User Studies | Other | Co-author | ACM Multimedia2008 | pp.793-796 |
123 | International academic conference | 2008 | GazeRoboard: Gaze-communicative Guide System in Daily Life on Stuffed-toy Robot with Interactive Display Board | Other | Co-author | IEEE IROS2008 | pp.1204-1209 |
124 | International academic conference | 2008 | Evaluations of Interactive Guideboard with Gaze-communicative Stuffed-toy Robot | Other | Co-author | COGAIN2008 | pp. 53-58 |
125 | International academic conference | 2008 | Sheaf on Sheet: A concept of tangible interface for browsing on a flexible e-paper | Other | Co-author | SIGGRAPH2008 | Poster, B134 |
126 | International academic conference | 2008 | Remote and Head-Motion-Free Gaze Tracking for Real Environments with Automated Head-Eye Model Calibrations | Other | Co-author | IEEE CVPR2008 | ID 235 |
127 | International academic conference | 2008 | Remote Gaze Estimation with a Single Camera Based on Facial-Feature Tracking without Special Calibration Actions | Other | Co-author | ETRA2008 | pp.245-250 |
128 | Papers | 2007/10/1 | Perceptual Continuity and Naturalness of Expressive Strength in Singing Voice based on Speech Morphing | Academic Journal | Co-author | EURASIP Journal on Audio, Speech and Music Processing | Vol. 2007, Article ID 23807 (9 pages) |
129 | International academic conference | 2007 | Gaze-communicative Behavior of Stuffed-toy Robot with Joint Attention and Eye Contact based on Ambient Gaze-tracking | Other | Co-author | ACM ICMI2007 | pp. 140-145 |
130 | International academic conference | 2007 | Gazecoppet: Hierarchical Gaze-communication in Ambient Space | Other | Co-author | ACM SIGGRAPH2007 | Poster J06 |
131 | Papers | 2006/8/25 | Cross-modality of Expressive Strength in Gestural and Vocal Expression with Personification | Academic Journal | Co-author | Human Interface Society Journal | Vol. 8, No. 3, pp. 43-52 |
132 | Papers | 2006/3/1 | Continuous transformation of the singing voice expressions controlled by hand-puppet gesture | Academic Journal | Co-author | Journal of Acoustical Society of Japan | Vol.62, No.3, pp. 233-243 |
133 | International academic conference | 2006 | Crossmodal Coordination of Expressive Strength between Voice and Gesture for Personified Media | Other | Co-author | ACM ICMI2006 | pp.43-50 |
134 | International academic conference | 2005 | Gradually Changing Expression of Singing Voice based on Morphing | Other | Co-author | Interspeech2005 | pp.541-544 |
135 | International academic conference | 2005 | HandySinger: Expressive Singing Voice Morphing using Personified Handpuppet Interface | Other | Co-author | New Interfaces for Musical Expression 2005 | pp.121-126 |
136 | Papers | 2002/8/15 | Tactile Sensor-doll Interaction with Context-aware Music Expressions | Academic Journal | Co-author | Journal of Information Processing Society of Japan | Vol.43, No.8, pp.2810-2820 |
137 | International academic conference | 2002 | Musically Expressive Doll in Face-to-face Communication | Other | Co-author | IEEE ICMI2002 | pp.417-422 |
138 | International academic conference | 2002 | Awareness Communications by Entertaining Toy Doll Agents | Other | Co-author | International Workshop on Entertainment Computing 2002 | pp.326-333 |
139 | International academic conference | 2001/4 | Body, Clothes, Water, and Toys - Media Towards Natural Music Expressions with Digital Sounds - | Other | Co-author | CHI2001 Workshop on New Interfaces for Musical Expression | |
140 | International academic conference | 2001/4 | Context-aware Sensor-doll as a Music Expression Device | Other | Co-author | ACM SIGCHI2001 | pp.307-308 |
141 | International academic conference | 2000/8 | Tangible Sound: Musical Instrument Using Tangible Fluid Media | Other | Co-author | ICMC2000 | pp.551-554 |
142 | Papers | 2000/3 | Interaction of Musical Instruments Using Fluid | Academic Journal | Co-author | Transactions of the Virtual Reality Society of Japan | Vol. 5, No. 1, pp. 755-762 |
Commentary | Other | Tomoko Yonezawa | to appear | 2022/11
Papers | Amplification of Content-induced Arousal through a Robot's Physiological Stimulation toward the User and Its Effect on Familiarity (in Japanese) | Academic Journal | Naoto Yoshida;Tomoko Yonezawa | Journal of Japan Society for Fuzzy Theory and Intelligent Informatics | Vol. 34, No. 3, pp. 579-591 | 2022
Academic presentation | Other | Kotaro Hazeki;Tomoko Yonezawa | IEICE MVE Tech meeting | vol.121, no.349, MVE2021-37, pp.43-48 | 2022
Academic presentation | Other | Mako Ishida;Hibiki Takemura;Tomoko Yonezawa | IEICE MVE Tech meeting | vol.121, no.349, MVE2021-31, pp.7-12 | 2022
Academic presentation | Other | Naoki Matsumura;Tomoko Yonezawa | IEICE MVE Tech meeting | vol.121, no.349, MVE2021-33, pp.19-24 | 2022
Academic presentation | Other | Hibiki Takemura;Mako Ishida;Tomoko Yonezawa | IEICE MVE Tech meeting | vol.121, no.349, MVE2021-32, pp.13-18 | 2022
Academic presentation | Other | Cong Wang;Tomoko Yonezawa | IEICE MVE Tech meeting | vol.121, no.349, MVE2021-35, pp.31-36 | 2022
Academic presentation | Other | Hiroto Murakami;Naoto Yoshida;Tomoko Yonezawa;Yu Enokibori | HAI Symposium 2022 | ID: P-20 (5 pages) | 2022
Academic presentation | Other | Yaze Zhang;Xin Wan;Tomoko Yonezawa | HAI Symposium 2023 | ID: G-5 (8 pages) | 2022
Academic presentation | Other | Tomoko Yonezawa;Yuichi Okada;Satomi Shimizu | HI tech. meeting | Vol. 24, No. 1, pp. 87-92 | 2022
Academic presentation | Other | Kodai Matsuyama;Yaze Zhang;Tomoko Yonezawa | HI tech. meeting | Vol. 24, No. 1, pp. 59-68 | 2022
Academic presentation | Other | Junki Okada;Tomoko Yonezawa | HI tech. meeting | Vol. 24, No. 1, pp. 69-78 | 2022
Academic presentation | Other | Satomi Shimizu;Tomoko Yonezawa | 33rd Annual Conference, ID: PR0019 (Research Presentations: Childcare/Education 5, Measurement/Evaluation) | 2022
Academic presentation | Other | Naoki Kammaki;Tomoko Yonezawa | IEICE Tech. Report, vol. 121, no. 423, MVE2021-90, pp. 266-271 | 2022
Academic presentation | Other | Shunsuke Yoshitsugu;Ryota Mima;Keita Kobayashi;Taro Obayashi;Tomoko Yonezawa | IEICE Tech. Report, vol. 121, no. 423, MVE2021-89, pp. 261-265 | 2022
Academic presentation | Other | Yuto Fujii;Naoto Yoshida;Tomoko Yonezawa;Kenji Mase | vol.2022-MBL-102, no.22, pp.1-8 | 2022
Academic presentation | Other | Akihisa Tsukamoto;Naoto Yoshida;Tomoko Yonezawa;Kenji Mase;Yu Enokibori | pp. 1574-1582 | 2022
Academic presentation | Other | Hibiki Takemura;Mako Ishida;Tomoko Yonezawa | 2022-MUS-134 (56), pp. 1-9 | 2022
Academic presentation | Other | Shinji Iwata;Naoto Yoshida;Tomoko Yonezawa;Kenji Mase;Yu Enokibori | IEICE Tech. Report, vol. MVE2022, No. 8, pp. 42-48 | 2022
Academic presentation | Other | Ryota Mima;Taro Obayashi;Shunsuke Yoshitsugu;Ryoho Shinya;Tomoko Yonezawa | IEICE Tech. Report, vol. MVE2022, No. 4, pp. 19-24 | 2022
Academic presentation | Other | Taro Obayashi;Tomoya Osawa;Ryota Mima;Shunsuke Yoshitsugu;Ryoho Shinya;Tomoko Yonezawa | IEICE Tech. Report, vol. MVE2022, No. 5, pp. 25-29 | 2022
Academic presentation | Other | Tomoko Yonezawa;Hayato Katakawa | 63rd JES Conference | 2D5-2 (2 pages) | 2022
Academic presentation | Other | Naoki Matsumura;Tomoko Yonezawa | IEICE MVE Tech meeting | MVE2022-11, vol. 122, no. 175, pp. 9-14 | 2022
Academic presentation | Other | Naoki Kammaki;Tomoko Yonezawa | HIS 2022 | 1T-P3 (6 pages) | 2022
Academic presentation | Other | Satomi Shimizu;Tomoko Yonezawa | JELD | to appear | 2022
Academic presentation | Other | Yuto Fujii;Naoto Yoshida;Tomoko Yonezawa;Yu Enokibori;Kenji Mase | HCG Symposium | IEICE Tech. Report, HCGSYMPO, Print edition: ISSN 0913-5685, I-2-6 | 2021/12/15~2021/12/17
Academic presentation | Other | Hiroto Murakami;Naoto Yoshida;Tomoko Yonezawa;Yu Enokibori;Kenji Mase | HCG Symposium | IEICE Tech. Report, HCGSYMPO, Print edition: ISSN 0913-5685, A-6-3 | 2021/12/15~2021/12/17
Academic presentation | Other | Naoto Yoshida;Hiroto Murakami;Shinji Iwata;Tomoko Yonezawa;Yu Enokibori;Kenji Mase | HCG Symposium | IEICE Tech. Report, HCGSYMPO, Print edition: ISSN 0913-5685, A-5-4 | 2021/12/15~2021/12/17
International academic conference | Water-Human-Computer-Interface (WaterHCI): Crossing the Borders of Computation, Clothes, Skin, and Surface | Unrefereed | Other | Co-author | Steve Mann;Mark Mattson;Steve Hulford;Mark Fox;Kevin Mako;Ryan Janzen;Maya Burhanpurkar;Simone Browne;Craig Travers;Robert Thurmond;Seung-min Park;Cayden Pierce;Samir Khaki;Derek Lam;Faraz Sadrzadeh-Afsharazar;Kyle Simmons;Tomoko Yonezawa;Ateeya Manzoo | 23rd annual WaterHCI DECONference 2021 | 2021/12/9 | Toronto, ON, Canada | Water-Human-Computer Interface (WaterHCI), or, more generally, Fluidic-User-Interface (i.e., including other fluids), is a relatively new concept and field of inquiry that originated in Canada in the 1960s and 1970s and was further developed at the University of Toronto from 1998 to the present. We provide taxonomies of the various kinds of water-human interaction, identify important past, present, and future contributions and trends in WaterHCI from around the world, and identify grand challenges of this new discipline.
International academic conference | Agent's Internal State Expression Related to Desire and Suppress Based on Behavior and Physiological Expression | Refereed | Other | Co-author | YOSHIDA, Naoto;YONEZAWA, Tomoko | HAI2021 | pp. 417-422 | 2021/11/9~2021/11/11
International academic conference | Toward internal-state-based Parameterized Model of Robot's Touching Manners based on Subjective Evaluation | Refereed | Other | Co-author | YONEZAWA, Tomoko;YAMAZOE, Hirotake | HAI 2021 | pp. 438-442 | 2021/11/9~2021/11/11
Papers | Quantitative effects on multiple involuntary physiologic expressions that convey the fear of robots | Refereed | Academic Journal | Co-author | MENG, Xiaoshun;YOSHIDA, Naoto;WAN, Xin;YONEZAWA, Tomoko | Journal of Japan Society for Fuzzy Theory and Intelligent Informatics | Vol.33, No.4, pp.501--515 | 2021/11 | Japan Society for Fuzzy Theory and Intelligent Informatics | In this paper, we introduce our study on cross-modal physiological expression on a robot's skin using goosebumps, perspiration, and shiver. Humans and other living beings show their voluntary and involuntary states via physiological phenomena, and the main visible/tangible phenomena appear on the skin. We especially focused on the expressive strengths and combinations of the three involuntary expressions above to affect the nuances of instinctive fear emotions. The evaluation results showed that the robot's fear emotion, aliveness, and other impressions can be transmitted even by a single involuntary expression, which might be caused by ceiling effects of each modality's strong effectiveness. In addition, some combinations of multiple involuntary expressions, such as increased annoyance with a small amount of sweating combined with a large amount of goosebumps, showed unique expressiveness on the fear factors extracted in our analyses.
Academic presentation | Other | Yuki Kitagishi;Hosana Kohyama;Takeshi Mori;Taichi Asami;Naohiro Tawara;Tomoko Yonezawa | IEICE Tech. Report, vol. 121, no. 211, HIP2021-30, pp. 1-6 | 2021/10/21~2021/10/22
International academic conference | Community Interaction Optimization on Twitter for people with Mood Disorders | Refereed | Other | Co-author | Yuichi Okada;Naoya Itoh;Tomoko Yonezawa | SOTICS 2021 (The Eleventh International Conference on Social Media Technologies, Communication, and Informatics) | Article No. sotics_2021_1_10_60015, ISSN: 2326-9294, ISBN: 978-1-61208-899-0, pp.1-6 | 2021/10/3~2021/10/7 | hybrid conference (online/Barcelona)
Academic presentation | Unrefereed | Other | Co-author | Naoki Kammaki;Natsumi Murakami;Tomoko Yonezawa | 2021/9/17~2021/9/18
Academic presentation | Unrefereed | Other | Co-author | Naoki Matsumura;Tomoko Yonezawa | 2021/9/17~2021/9/18
International academic conference | Attention-Guidance Method Based on Conforming Behavior of Multiple Virtual Agents for Pedestrians | Refereed | Other | Co-author | Naoto Yoshida;Tomoko Yonezawa | IVA 2021 | to appear | 2021/9/14~2021/9/17 | online
Academic presentation | Unrefereed | Other | Co-author | Yuki Kitagishi;Yukinori Hamada;Tomoko Yonezawa | 2021/8/21~2021/8/22
International academic conference | Optimal community-generation methods for acquiring extensive knowledge on Twitter | Refereed | Other | Co-author | Yuichi Okada;Naoya Itoh;Tomoko Yonezawa | HCII 2021 | Social Computing and Social Media: Experience Design and Social Network Analysis, pp. 105-120 | 2021/7/24~2021/7/29
International academic conference | Elderly sleep support agent using physical contact presence by visual and tactile presentation | Refereed | Other | Co-author | Zhang Yaze;Xin Wan;Tomoko Yonezawa | HCII 2021 | Human Aspects of IT for the Aged Population: Supporting Everyday Life Activities, pp. 348-362 | 2021/7/24~2021/7/29 | online
Magazine article | Unrefereed | Academic Journal | Single-Author | Tomoko Yonezawa | JSAI | 2021/7
Academic presentation | Unrefereed | Other | Co-author | Shinji Iwata;Naoto Yoshida;Tomoko Yonezawa;Yu Enokibori;Kenji Mase | 2021/6/1~2021/6/2
Academic presentation | Unrefereed | Other | Co-author | Yuichi Okada;Kensei Oyamada;Tomoko Yonezawa | 2021/5/24~2021/5/25
Papers | The Effect of Interactive News Reading Method for A Newscaster Agent on Trust and Closeness | Refereed | Academic Journal | Co-author | Naoto Yoshida;Miyuki Yano;Tomoko Yonezawa | HISJ Journal | Vol.23, No.2, pp.165-176 | 2021/5
Magazine article | Unrefereed | Academic Journal | Single-Author | Tomoko Yonezawa | JSAI | 2021/5
Academic presentation | Unrefereed | Other | Co-author | Hibiki Takemura;Mako Ishida;Tomoko Yonezawa | 2021/3/16~2021/3/17
Academic presentation | Unrefereed | Other | Co-author | Tatsuya Imai;Naoto Yoshida;Tomoko Yonezawa;Yu Enokibori;Kenji Mase | 2021/3/9~2021/3/10
Academic presentation | Unrefereed | Other | Co-author | Mako Ishida;Hibiki Takemura;Tomoko Yonezawa | 2021/3/9~2021/3/10
International academic conference | AR avatar separated from lecturer for individual communication in one-to-many communication | Refereed | Other | Co-author | Yuki Kitagishi;Tomoko Yonezawa | ICAT-EGVE 2020 | DOI: 10.2312/egve.20201276, pp. 15-16 | 2020/12/2~2020/12/4
Magazine article | Unrefereed | Other | Co-author | Tomoko Yonezawa | SF Magazine 2020.12 | 2020/12
International academic conference | Stimulation of Learning Motivation by Multiple Agents in Group Training | Refereed | Other | Co-author | Cong Wang;Naoto Yoshida;Xin Wan;Tomoko Yonezawa | HAI 2020 | DOI: https://doi.org/10.1145/3406499.3415069, pp.25--31 | 2020/10
Academic presentation | Unrefereed | Other | Co-author | Naoto Yoshida;Hayao Hirano;Tomoko Yonezawa;Yu Enokibori;Kenji Mase | 2020/9/29~2020/9/30
Academic presentation | Unrefereed | Other | Co-author | Kensei Oyamada;Yuichi Okada;Tomoko Yonezawa | IPSJ Kansai 2020 | 2020/9
Academic presentation | Unrefereed | Other | Co-author | Yaze Zhang;Xin Wan;Tomoko Yonezawa | IPSJ Kansai 2020 | 2020/9
Papers | Instinctive expressions through involuntary representation on robot's haptic skin | Refereed | Academic Journal | Co-author | Xiaoshun Meng;Naoto Yoshida;Xin Wan;Tomoko Yonezawa | HISJ Journal | Vol.22, No.3, pp.235-250 | 2020/8
Academic presentation | Unrefereed | Other | Co-author | Yuichi Okada;Tomoko Yonezawa | IEICE HCS Tech Meeting | 2020/8
Academic presentation | Unrefereed | Other | Co-author | Xin Wan;Tomoko Yonezawa | IEICE HCS Tech Meeting | 2020/8
International academic conference | Partner Agent Showing Continuous and Preceding Daily Activities for Users Behavior Modification | Refereed | Other | Co-author | Tomoko Yonezawa;Naoto Yoshida;Keiichiro Nagao;Xin Wan | HCII 2020 | 2020/7
International academic conference | Basic Study of Wall-projected Humanitude Agent for Pre-care Multimodal Interaction | Refereed | Other | Co-author | Xin Wan;Tomoko Yonezawa | HCII 2020 | 2020/7
International academic conference | Analysis of Effects on Postural Stability by Wearable Tactile Expression Mechanism | Refereed | Other | Co-author | Hirotake Yamazoe;Tomoko Yonezawa | HCII 2020 | 2020/7
Academic presentation | Unrefereed | Other | Co-author | Cong Wang;Naoto Yoshida;Tomoko Yonezawa | IEICE ET Tech meeting | 2020/6
Academic presentation | Unrefereed | Other | Co-author | Xin Wan;Tomoko Yonezawa | HAI Symposium 2020 | 2020/3
Academic presentation | Unrefereed | Other | Co-author | Naoto Yoshida;Kaede Ueno;Kenji Mase;Tomoko Yonezawa | HAI Symposium 2020 | 2020/3
Academic presentation | Unrefereed | Other | Co-author | Naoto Yoshida;Cong Wang;Kazuki Umeda;Kenji Mase;Tomoko Yonezawa | HAI Symposium 2020 | 2020/3
Academic presentation | Unrefereed | Other | Co-author | Yaze Zhang;Xin Wan;Tomoko Yonezawa | HAI Symposium 2020 | 2020/3
Academic presentation | Unrefereed | Other | Co-author | Cong Wang;Xin Wan;Tomoko Yonezawa | HI tech. meeting | 2020/3
Academic presentation | Unrefereed | Other | Co-author | Naoto Yoshida;Yu Enokibori;Kenji Mase;Hayao Hirano;Tomoko Yonezawa | HI tech. meeting | 2020/3
Academic presentation | Unrefereed | Other | Co-author | Naoya Itoh;Yuichi Okada;Tomoko Yonezawa | IPSJ ICS Tech. meeting | 2020/3
Chapter or Section | Unrefereed | Academic Journal | Single-Author | Tomoko Yonezawa | IPSJ Magazine | 2020/1
Magazine article | Unrefereed | Academic Journal | Co-author | Kyuri Yamada;Kentaro Fukuchi;Hirotaka Osawa;Doujin Miyamoto;Koichiro Eto;Itaru Kuramoto;Junji Watanabe;Taro Maeda;Yumi Nakamura;Yuki Terashima;Atsushi Kato;Tomoko Yonezawa;Masahiro Shiomi;Ryouma Niiyama;Takashi Miyamoto;Yuta Mizuno;Sho Sakurai | IPSJ Magazine | 2020/1
Papers | Effectiveness of acoustic AR-TA agent using localized footsteps corresponding to audience members' attitudes | Refereed | Academic Journal | Co-author | Yuki Kitagishi;Tomoko Yonezawa | Int. J. of Simulation and Process Modeling, Inderscience | Special Issue on “Virtual and Augmented Reality in Industry & Logistics,” Vol.15, No.6 | 2020
Commentary | Anthropomorphic Intermediation of Robots and Agents for Communication Assistance | Unrefereed | Academic Journal | Single-Author | Tomoko Yonezawa | 2019/11
Academic presentation | Unrefereed | Other | Co-author | Xiaoshun Meng;Xin Wan;Tomoko Yonezawa | IEICE HCS Tech meeting | 2019/10/26 | Tokyo, JAPAN
International academic conference | Emotional Gripping Expression of a Robotic Hand as Physical Contact | Refereed | Other | Co-author | Xiaoshun Meng;Naoto Yoshida;Tomoko Yonezawa | HAI 2019 | pp. 37-42, DOI: https://doi.org/10.1145/3349537.3351884 | 2019/10 | Kyoto, Japan
International academic conference | Agent's Internal State Expression by Combining Its Desiring Behaviors and Heartbeat | Refereed | Other | Co-author | Naoto Yoshida;Kaede Ueno;Tomoko Yonezawa | HAI2019 poster | pp. 226-228, DOI: https://doi.org/10.1145/3349537.3352773 | 2019/10 | Kyoto, Japan
Academic presentation | Unrefereed | Other | Co-author | Tomoko Yonezawa;Hirotake Yamazoe | Human Interface Symposium | 2019/9 | Doshisha University
Academic presentation | Unrefereed | Other | Co-author | Naoya Itoh;Tomoko Yonezawa | IPSJ Kansai 2019 | 2019/9 | Osaka Univ. Nakanoshima Center
Academic presentation | Unrefereed | Other | Co-author | Tsuyoshi Tabata;Tomoko Yonezawa | IPSJ Kansai 2019 | 2019/9 | Osaka Univ. Nakanoshima Center
Academic presentation | Effect of Multi-point Vibration and Position on the Sole and Instep of the Foot on Vibration Perception | Unrefereed | Other | Co-author | Naoto Yoshida;Hayao Hirano;Tomoko Yonezawa;Yu Enokibori;Kenji Mase | IEICE MVE Tech meeting | vol. 119, no. 190, MVE2019-19, pp. 73-78 | 2019/8 | Nagoya, Japan
Academic presentation | Unrefereed | Other | Co-author | Yuki Kitagishi;Tomoko Yonezawa | IEICE ET Tech meeting | vol. 119, no. 43, ET2019-7, pp. 37-42 | 2019/5/16 | Toyama Univ.
International academic conference | Preliminary Experiment on Shareable and Portable Voice Sticky using Sound Orientation | Refereed | Other | Co-author | Tomoko Yonezawa;Hirotake Yamazoe | IEEE VR NeuroVirt WS | DOI: 10.1109/VR.2019.8798302 | 2019/3/23 | Osaka
International academic conference | Japanese Tea Ceremony Experience with Multimodal AR Expressing Mental Concentration | Refereed | Other | Co-author | Tomoko Yonezawa;Naoto Yoshida;Nanase Ishikawa | IEEE VR NeuroVirt WS | DOI: 10.1109/VR.2019.8797964 | 2019/3/23 | Osaka
Academic presentation | Unrefereed | Other | Co-author | Kaede Ueno;Naoto Yoshida;Tomoko Yonezawa | HAI Domestic Symposium 2018 | G-15 (13 pages) | 2019/3/8~2019/3/9 | Tokyo
Academic presentation | Unrefereed | Other | Co-author | Tomoko Yonezawa;Hirotake Yamazoe | HAI Domestic Symposium 2018 | G-16 (9 pages) | 2019/3/8~2019/3/9 | Tokyo
Academic presentation | Unrefereed | Other | Co-author | Naoto Yoshida;Tomoko Yonezawa | HAI Domestic Symposium 2018 | P-25 (6 pages) | 2019/3/8~2019/3/9 | Tokyo
Academic presentation | Unrefereed | Other | Co-author | Naoto Yoshida;Tomoko Yonezawa | HAI Domestic Symposium 2018 | P-24 (6 pages) | 2019/3/8~2019/3/9 | Tokyo
Academic presentation | Unrefereed | Other | Co-author | Xin Wan;Tomoko Yonezawa | HAI Domestic Symposium 2018 | P-20 (10 pages) | 2019/3/8~2019/3/9 | Tokyo
Academic presentation | Unrefereed | Other | Co-author | Naoya Itoh;Tomoko Yonezawa | IEICE HCS | IEICE Tech. Report, vol. 118, no. 487, HCS2018-71, pp. 25-30 | 2019/3 | Hokkaido
Academic presentation | Effectiveness of switching target of speech using two kinds of microphones in one-to-many communication | Unrefereed | Other | Co-author | Yuki Kitagishi;Yuki Tanaka;Tomoko Yonezawa | IEICE ET Tech meeting | vol. 118, no. 427, ET2018-78, pp. 7-12 | 2019/1/26 | Osaka
Academic presentation | AR shadow-clone agent of lecturer for promoting individual communication | Unrefereed | Other | Co-author | Yuki Kitagishi;Yuki Tanaka;Tomoko Yonezawa | IEICE MVE Tech meeting | PRMU2018-105, pp. 107-115 | 2019/1/17~2019/1/18 | Kyoto
Academic presentation | Supporting co-eating communication with switching desk-around AR environment using translucent partitions | Unrefereed | Other | Co-author | Yipeng He;Shoko Tsujino;Tomoko Yonezawa | IEICE MVE Tech meeting | PRMU2018-104, pp. 101-107 | 2019/1/17~2019/1/18 | Kyoto
Chapter or Section | Unrefereed | Monograph | Single-Author | Tomoko Yonezawa | CMC Publication [Emotion and Thinking Sensing Technology in the Internet of Humans] | 2019
International academic conference | Analyses of Textile Pressure-map Sensor Data of a Stuffed Toy for Understanding Human Emotional Physical Contact | Refereed | Other | Co-author | Tomoko Yonezawa;Hirotake Yamazoe | Human Agent Interaction 2018 | pp.191--198 | 2018/12/15~2018/12/18 | Southampton, UK
International academic conference | Attracting Attention and Changing Behavior toward Wall Advertisements with a Walking Virtual Agent | Refereed | Other | Co-author | Naoto Yoshida;Sho Hanasaki;Tomoko Yonezawa | Human Agent Interaction 2018 | pp.61--66 | 2018/12/15~2018/12/18 | Southampton, UK
International academic conference | Arousal and Valence in Robot's Emotional Expression of Breathing and Heartbeat | Refereed | Other | Co-author | Naoto Yoshida;Tomoko Yonezawa | Human Agent Interaction 2018 (HAI2018) | pp.330--332 | 2018/12/15~2018/12/18 | Southampton, UK
International academic conference | Internal Flow Model and Behavioral Design for an Artificial Agent's Ownership-desire Model | Refereed | Other | Co-author | Kaede Ueno;Naoto Yoshida | Human Agent Interaction 2018 (HAI2018) | pp.362--364 | 2018/12/15~2018/12/18 | Southampton, UK
International academic conference | Preliminary Examination of Walking Stability Improvement by Wearable Tactile Expression Mechanism | Refereed | Other | Co-author | Hirotake Yamazoe;Tomoko Yonezawa | Human Agent Interaction 2018 (HAI2018) | pp.350--352 | 2018/12/15~2018/12/18 | Southampton, UK
International academic conference | Verification of Discussion-Stimulating System for Online Creative Meetings using Key Phrases and Mind Maps | Refereed | Other | Co-author | Yipeng He;Tomoko Yonezawa | SCIS-ISIS 2018 | pp. 967--974 | 2018/12/5~2018/12/8 | Toyama, Japan
International academic conference | Switching Target of Speech between Whole and Particular Audiences using Face Direction and Two Microphones | Refereed | Other | Co-author | Yuki Kitagishi;Tomoko Yonezawa | UAC2018 (International Symposium on Universal Acoustical Communication 2018) | poster 2-26 (2 pages) | 2018/10/22~2018/10/24 | Sendai, Japan
Academic presentation | Unrefereed | Other | Co-author | Naoya Itoh;Tomoko Yonezawa;Yuichi Okada;Yuji Nakagawa | G-05 (4 pages) | 2018/9/30
Academic presentation | Unrefereed | Other | Co-author | Yuki Kitagishi;Tomoko Yonezawa | D-103 (7 pages) | 2018/9/30
International academic conference | Acoustic AR-TA Agent using Footsteps in Corresponding to Audience Members' Participating Attitudes | Refereed | Other | Co-author | Yuki Kitagishi;Tomoko Yonezawa | VARE2018 | pp. 113-122 | 2018/9/17~2018/9/21 | Budapest, Hungary
Academic presentation | Preliminary Design of Internal and Acting Models of Agent's Ownership Desire | Unrefereed | Other | Co-author | Kaede Ueno;Naoto Yoshida;Tomoko Yonezawa | IEICE HCS | HCS2018-32, pp.1--6 | 2018/8/26~2018/8/27
Academic presentation | Arousal and Valence in Robot's Emotional Expression by Artificial Physiological Phenomena | Unrefereed | Other | Co-author | Naoto Yoshida;Tomoko Yonezawa | IEICE HCS | HCS2018-33, pp.7--12 | 2018/8/26~2018/8/27
Commentary | Part 1, V. Human-agent Interaction | Unrefereed | Other | Co-authored chapter | Tomoko Yonezawa | NDL Perspectives on Artificial Intelligence/Robotics and Work/Employment | 2018/7 | Hiromitsu Hattori Ed. | http://hdl.handle.net/10367/11072
Academic presentation | Unrefereed | Other | Co-author | Hirofumi Watanabe;Yu Enokibori;Tomoko Yonezawa;Kenji Mase | DICOMO 2018 | pp. 129-136 | 2018/7
Papers | Lecturer's Understanding in Large Classroom by Overlapped Color Map Based on Estimation of Audience's Attitudes | Refereed | Academic Journal | Co-author | Yuki Kitagishi;Tomoko Yonezawa | IEICE Japanese Journal (Society D) | Vol.J101-D, No.6, pp.944-957 | 2018/6
Papers | AR Projection System Enhancing Visual Guidance Effect of the Pointing Gesture of Audience in Lecture | Refereed | Academic Journal | Co-author | Kaede Ueno;Naoto Yoshida;Tomoko Yonezawa | IEICE Japanese Journal (Society D) | Vol.J101-D, No.6, pp.932-943 | 2018/6
Academic presentation | Virtual Convex Segment by Vibration Actuator Array Revised by Different Sensitivities in a Sole | Unrefereed | Other | Co-author | Naoto Yoshida;Hayao Hirano;Yu Enokibori;Tomoko Yonezawa | 153rd HI Tech-meeting (SIG-ACI-21) | pp.43--50 | 2018/3/27 | Kyoto Institute of Technology
Academic presentation | Preceding movement of humanitude virtual agent into user's FOV before conversation supporting dementia elderly people | Unrefereed | Other | Co-author | Xin Wan;Xiaoshun Meng;Kaede Ueno;Tomoko Yonezawa | 153rd HI Tech-meeting (SIG-ACI-21) | pp.17--22 | 2018/3/27 | Kyoto Institute of Technology
Academic presentation | AR Shadow-clone agent system for speaker in one-to-many communication | Unrefereed | Other | Co-author | Yuki Tanaka;Yuki Kitagishi;Tomoko Yonezawa | 153rd HI Tech-meeting (SIG-ACI-21) | pp.37--42 | 2018/3/27 | Kyoto Institute of Technology
Academic presentation | Switching multiple desks environments using semitransparent partition screen and sound localization | Unrefereed | Other | Co-author | Shoko Tsujino;Yipeng He;Kaede Ueno;Naoto Yoshida;Tomoko Yonezawa | 153rd HI Tech-meeting (SIG-ACI-21) | pp.23--28 | 2018/3/27 | Kyoto Institute of Technology
Lecture | Unrefereed | Other | Co-author | Hirotake Yamazoe;Tomoko Yonezawa | Society for Serviceology 6th Conference | OS4-07 | 2018/3/10~2018/3/11 | Meiji University
Academic presentation | Unrefereed | Other | Co-author | Hirotake Yamazoe;Tomoko Yonezawa | Interaction 2018 | 2P10 | 2018/3/5~2018/3/7 | National Institute of Informatics
Commentary | Part 1, V. Human-agent Interaction | Unrefereed | Other | Co-authored chapter | Tomoko Yonezawa | NDL Perspectives on Artificial Intelligence/Robotics and Work/Employment | 2018/3 | Hiromitsu Hattori Ed.
Papers | Refereed | Academic Journal | Co-author | Naoto Yoshida;Tomoko Yonezawa | IEICE Japanese Journal (Society D) | Vol.J101-D, No.02, pp.263-274 | 2018/2
Academic presentation | Unrefereed | Other | Co-author | Yuki Kitagishi;Tomoya Yamana;Tomoko Yonezawa | no.20 | 2017/12/16 | Doshisha University
International academic conference | Enhancing pointing gestures using an automatic projection system | Refereed | Other | Co-author | Kaede Ueno;Naoto Yoshida;Yuki Kitagishi;Tomoko Yonezawa | ACIS2017 | pp. 161-164 | 2017/12/12~2017/12/14 | Phnom Penh, Cambodia
International academic conference | Lecture support system for understanding an audience's attitudes using optical flow and overlapped color mapping | Refereed | Other | Co-author | Yuki Kitagishi;Tomoko Yonezawa | ACIS2017 | pp. 145-148 | 2017/12/12~2017/12/14 | Phnom Penh, Cambodia
Academic presentation | Unrefereed | Other | Co-author | Naoto Yoshida;Tomoko Yonezawa | HAI Symposium 2017 | D-3 (11 pages), invited to the discussion session | 2017/12/11~2017/12/12 | The Kanazawa Theatre
Academic presentation | Unrefereed | Other | Co-author | Kaede Ueno;Naoto Yoshida;Tomoko Yonezawa | HAI Symposium 2017 | P14 (9 pages) | 2017/12/11~2017/12/12 | The Kanazawa Theatre
Academic presentation | Unrefereed | Other | Co-author | Hayao Hirano;Naoto Yoshida;Yu Enokibori;Tomoko Yonezawa | IPSJ SIG-AAC the 5th Tech meeting | Vol.2017-AAC-5, No.15, pp.1--8 | 2017/12/8~2017/12/9 | Tokyo Capital University
Academic presentation | Analyses of Creative Discussion Stimulating System using Key Phrases and Mind-map based on Online E-Papers | Unrefereed | Other | Co-author | Yipeng He;Tomoko Yonezawa | IEICE ET Tech meeting | Vol.117, No.335, ET2017-72, pp. 21--26 | 2017/12/2 | Kanazawa Institute of Technology
Academic presentation | Effects of Footstep AR Agent as TA for Each Different Individuals in Multiple Audiences | Unrefereed | Other | Co-author | Yuki Kitagishi;Tomoko Yonezawa | IEICE ET Tech meeting | Vol.117, No.335, ET2017-73, pp. 27--32 | 2017/12/2 | Kanazawa Institute of Technology
International academic conference | Indirect control of user's e-learning motivation by controlling activity ratio of multiple agents | Refereed | Other | Co-author | Tomoko Yonezawa;Naoto Yoshida;Kaoru Maeda | HAI2017 | pp. 27--34 | 2017/10/17~2017/10/20 | Bielefeld, Germany
International academic conference | Physiological Expression of Robots Enhancing Users' Emotion in Direct and Indirect Communication | Refereed | Other | Co-author | Naoto Yoshida;Tomoko Yonezawa | HAI2017 poster | pp. 505--509 | 2017/10/17~2017/10/20 | Bielefeld, Germany
LectureUnrefereedOtherSingle-AuthorTomoko YonezawaOsaka University Science Cafe, the 18th "Hitokoto Ichiba"2017/9/26~
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;Tomoko YonezawaIPSJ-Kansai2017B--102, (6 pages),2017/9/25~Osaka Univ. Nakanoshima Center
Academic presentationUnrefereedOtherCo-authorKaede Ueno;Tomoko YonezawaIPSJ-Kansai2017D--102, (7 pages),2017/9/25~Osaka Univ. Nakanoshima Center
Academic presentationUnrefereedOtherCo-authorYuki Kitagishi;Tomoko YonezawaIPSJ-Kansai2017D-10 (8 pages)2017/9/25~Osaka Univ. Nakanoshima Center
Academic presentationUnrefereedOtherCo-authorXiaoshun Meng;Tomoko YonezawaIPSJ-Kansai2017B--103, (6 pages),2017/9/25~Osaka Univ. Nakanoshima Center
Academic presentationUnrefereedOtherCo-authorTomoko Yonezawa;Hirotake YamazoeHIS20175T-d3-112017/9/4~2017/9/4Osaka Institute of Technology
Academic presentationPressure map data analyses of textile-type sensor for classification of physical contact pattern on stuffed toyUnrefereedOtherCo-authorTomoko Yonezawa;Hirotake YamazoeIEICE VNV-HCSvol.2017-HCS-117, vol. 117, no. 177, HCS2017-53, pp. 35--40,2017/8/20~2017/8/21Seikei Univ, Tokyo
LectureVoisticky: Sharable and Portable Auditory Balloon with Voice Sticky Posted and Browsed by User's Head DirectionUnrefereedOtherCo-authorTomoko Yonezawa;Hirotake YamazoeThe 1st International Scientific Conference on Hospitality and its Applications (ISCHA)2017/8/8~2017/8/9Wakuu Shimoderacho, Osaka
Academic presentationUnrefereedOtherCo-authorHirofumi Watanabe;Yu Enokibori;Tomoko Yonezawa;Kenji MaseIPSJ-Tech-UBIIPSJ UBI/ASD Tech meeting, 2017-ASD-9, pp. 1--6,2017/8~Nagoya University
International academic conferenceHaptic interaction design for physical contact between a wearable robot and the userIn refereedOtherCo-authorTomoko Yonezawa;Hirotake YamazoeHCII2017, Springer International Publishing Switzerlandpp.476--4902017/7/9~2017/7/14Vancouver, CANADA
International academic conferenceA tactile expression mechanism using pneumatic actuator array for notification from wearable robotsIn refereedOtherCo-authorHirotake Yamazoe;Tomoko YonezawaHCII2017, Springer International Publishing Switzerlandpp.466--4752017/7/9~2017/7/14Vancouver, CANADA
International academic conferenceEstimating Emotion of User via Communicative Stuffed-toy Device with Pressure Sensors Using Fuzzy ReasoningIn refereedOtherCo-authorTomoko Yonezawa;Haruka Mase;Hirotake Yamazoe;Kazuki JoeURAI2017P2-792017/6/28~2017/7/1Jeju, Korea
Academic presentationUnrefereedOtherCo-authorYipeng He;Naoto Yoshida;Kaede Ueno;Tomoko YonezawaIPSJ CLEvol.2017-CLE-21, no.13, pp.1--6,2017/3/21~2017/3/22Kyoto University
Academic presentationUnrefereedOtherCo-authorTomoya Yamana;Yuki Kitagishi;Tomoko YonezawaIPSJ CLEvol.2017-CLE-21, no.14, pp.1--7,2017/3/21~2017/3/22Kyoto University
Academic presentationIn refereedOtherCo-authorHirotake Yamazoe;Tomoko YonezawaInteraction 2017Premium presentation, 1-6F-01, pp.106--1092017/3/2~2017/3/4Meiji University
LectureUnrefereedOtherSingle-AuthorTomoko YonezawaTGL T-lab Professional2017/1/31~
LectureUnrefereedOtherCo-authorNaoto Yoshida2017/1/31~2017/1/31
LectureUnrefereedOtherCo-authorKaede Ueno2017/1/31~2017/1/31
Academic presentationUnrefereedOtherCo-authorKaede Ueno;Naoto Yoshida;Tomoko YonezawaIEICE ETET2016-90, pp 63--682017/1/28~National Institute of Special Needs Education
Academic presentationUnrefereedOtherCo-authorYuki Kitagishi;Tomoko YonezawaIEICE ETET2016-79, pp 7--112017/1/28~National Institute of Special Needs Education
Academic presentationEmotion Estimation Using Pressure Sensors in Communicative Stuffed-toy Device with Fuzzy ReasoningUnrefereedOtherCo-authorHaruka Mase;Tomoko Yonezawa;Kazuki JoeIPSJ MPS2016-MPS-111, no. 6, pp.1--62016/12/12~Tokyo, JAPAN
LectureUnrefereedOtherSingle-AuthorTomoko Yonezawa2016/12/10~Hiroshima Institute of Technology
International academic conferenceGroveling on the Wall: Interactive VR Attraction using Gravity IllusionIn refereedOtherCo-authorKaede Ueno;Naoto Yoshida;Tomoko YonezawaSIGGRAPH ASIA 2016poster (2pages)2016/12/5~2016/12/8Macao, China
International academic conferenceVirtual Ski Jump: illusion of slide down the slope and glidingIn refereedOtherCo-authorNaoto Yoshida;Kaede Ueno;Yusuke Naka;Tomoko YonezawaSIGGRAPH ASIA 2016poster (2pages)2016/12/5~2016/12/8Macao, China
Academic presentationUnrefereedOtherCo-authorXiaoshun Meng;Naoto Yoshida;Tomoko YonezawaHAI symposium 2016P-202016/12/3~
International academic conferenceIntegrating auditory space for multiple people in real world using their personal devicesIn refereedOtherCo-authorTomoko Yonezawa;Yosuke Ino;Naoto Yoshida;Yuki KitagishiUV 2016II-2-5, 5 pages2016/10/7~Nagoya
International academic conferenceStepwise Experience Design of Tactile Interaction in Children's EnrobotmentIn refereedOtherSingle-AuthorTomoko YonezawaHAI 2016 WSEnrobotment WS, 3 pages2016/10/4~Singapore
International academic conferenceEvaluation of Schedule Managing Agent among Multiple Members with Representation of Background NegotiationsIn refereedOtherCo-authorTomoko Yonezawa;Naoto Yoshida;Kunihiko FujiwaraHAI 2016pp.305--3132016/10~Singapore
International academic conferenceInvestigating Breathing Expression of a Stuffed-Toy Robot Based on Body-Emotion ModelIn refereedOtherCo-authorNaoto Yoshida;Tomoko YonezawaHAI 2016pp.139--1452016/10~Singapore
International academic conferenceAccelerating Physical Experience of Immersive and Penetrating Music by Vibration-motor Array in a Wearable Belt SetIn refereedOtherCo-authorTomoko Yonezawa;Shota Yanagi;Naoto Yoshida;Yuki IshikawaIFIP ICEC 2016Springer LNCS 9926, pp.173--1872016/9/28~2016/9/30Vienna, Austria
Academic presentationUnrefereedOtherCo-authorMakoto Yamane;Haruna Tanaka;Tomoko YonezawaG-112 (9 pages),2016/9/26~
Academic presentationUnrefereedOtherCo-authorAoi Serikawa;Naoto Yoshida;Tomoko YonezawaD-103 (5 pages)2016/9/26~
Academic presentationUnrefereedOtherCo-authorShogo Maeda;Naoto Yoshida;Tomoko YonezawaD-104 (6 pages),2016/9/26~
Academic presentationUnrefereedOtherCo-authorKunihiko Fujiwara;Tomoko YonezawaB-01 (9 pages),2016/9/26~
Academic presentationUnrefereedOtherCo-authorHiroaki Ueda;Tomoko YonezawaB-02 (9 pages),2016/9/26~
Academic presentationUnrefereedOtherCo-authorYuto Nishiumi;Naoto Yoshida;Tomoko YonezawaB-101 (6 pages),2016/9/26~
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;Tomoko YonezawaD-105 (6 pages),2016/9/26~
LectureUnrefereedOtherSingle-AuthorTomoko YonezawaFuture of Kansei Robotics session, F21 (3 pages)2016/9/9~
International academic conferenceSeamless Change of Modality Volume in Observation of Elderly Daily LivesIn refereedOtherCo-authorTomoko Yonezawa;Yusuke NakaICServ 2016pp.73--802016/9/6~
Academic presentationUnrefereedOtherCo-authorYusuke Naka;Tomoko YonezawaHI tech. meetingSIG-ACI-17, Vol.18, No.1, pp. 61--68,2016/3/28~2016/3/28
Academic presentationUnrefereedOtherCo-authorRyosuke Matsuoka;Naoto Yoshida;Tomoko YonezawaHI tech. meetingSIG-ACI-17, Vol.18, No.1, pp. 1--10,2016/3/28~2016/3/28
Academic presentationUnrefereedOtherCo-authorSho Hanasaki;Naoto Yoshida;Tomoko YonezawaIPSJ Tech. meeting ICSvol.2016-ICS-183, no. 12, pp. 1--82016/3/16~2016/3/16
Academic presentationUnrefereedOtherCo-authorHiroaki Ueda;Xiaoshun Meng;Naoto Yoshida;Tomoko YonezawaIPSJ Tech. meeting ICSvol.2016-ICS-183, no. 3, pp. 1--9,2016/3/16~2016/3/16
Academic presentationUnrefereedOtherCo-authorKunihiko Fujiwara;Tomoko YonezawaIPSJ Tech. meeting ICSvol.2016-ICS-183, no. 4, pp. 1--8,2016/3/16~2016/3/16
Academic presentationUnrefereedOtherCo-authorKeiichiro Nagao;Kunihiko Fujiwara;Naoto Yoshida;Tomoko YonezawaIPSJ Tech. meeting ICSvol.2016-ICS-183, no. 10, pp. 1--8,2016/3/16~2016/3/16
Academic presentationUnrefereedOtherCo-authorKaoru Maeda;Kunihiko Fujiwara;Naoto Yoshida;Tomoko YonezawaIPSJ Tech. meeting ICSvol.2016-ICS-183, no. 9, pp. 1--8,2016/3/16~2016/3/16
Academic presentationSupporting speaker's understanding using color-overlapped image based on estimation of audience participationUnrefereedOtherCo-authorYuki Ishikawa;Yusuke Naka;Tomoko YonezawaIEICE Tech. Meeting (ET)ET2015--117, pp.129--1362016/3/5~
Academic presentationAuditory localization in closed space by synchronization algorithm of multiple portable devicesUnrefereedOtherCo-authorYosuke Ino;Yuki Ishikawa;Yusuke Naka;Tomoko YonezawaIEICE Tech Meeting (EA)vol.115, no.424, EA2015-58, pp. 19--26,2016/1/28~2016/1/29
PapersIn refereedAcademic JournalCo-authorNaoto Yoshida;Tomoko YonezawaIEICE Japanese Journal (Society D), Vol.J99-D, No.9, pp.915--9252016~
International academic conferenceWearable robot that measures user vital signs for elderly care and supportIn refereedOtherCo-authorHirotake Yamazoe;Tomoko Yonezawa9th EAI International Conference on Bio-inspired Information and Communications TechnologiesPages 53-572015/12/3~2015/12/5New York City, USA
International academic conferenceDesign of Pet Robots with Limitations of Lives and Inherited CharacteristicsIn refereedOtherCo-authorTomoko Yonezawa;Naoto Yoshida;Kento Kuboshima9th EAI International Conference on Bio-inspired Information and Communications TechnologiesPages 69-732015/12/3~2015/12/5New York City, USA
International academic conferenceBreathing Expression for Intimate Communication Corresponding to the Physical Distance and Contact between Human and RobotIn refereedOtherCo-authorNaoto Yoshida;Yukari Nakatani;Tomoko Yonezawa9th EAI International Conference on Bio-inspired Information and Communications TechnologiesPages 65-692015/12/3~2015/12/5New York City, USA
Academic presentationUnrefereedOtherCo-authorMeng Xiaoshun;Naoto Yoshida;Tomoko YonezawaHAI Symposium 2015P18, pp.165--1702015/12~2015/12
Academic presentationUnrefereedOtherCo-authorHiroaki Ueda;Meng Xiaoshun;Tomoko YonezawaHAI Symposium 2015P22, pp.192--1972015/12~2015/12
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;Tomoko YonezawaHAI Symposium 2015P27, pp.216--2212015/12~2015/12
International academic conferenceEvaluations of Involuntary Crossmodal Expressions on the Skin of a Communication RobotIn refereedOtherCo-authorMeng Xiaoshun;Naoto Yoshida;Tomoko YonezawaUbiquitous Robots and Ambient Intelligence 2015TC4-4, pp. 347--3522015/10/28~2015/10/30Goyang, KOREA
International academic conferenceDirection indication mechanism by pulling user's cloth for wearable message robotIn refereedOtherCo-authorHirotake Yamazoe;Tomoko YonezawaICAT-EGVE 2015P2 (4 pages)2015/10/28~2015/10/30Kyoto, Japan
International academic conferenceSpatial Communication and Recognition in Human-agent Interaction using the Motion Parallax-based 3DCG Virtual AgentIn refereedOtherCo-authorNaoto Yoshida;Tomoko YonezawaHuman Agent Interaction 2015pp.97--1032015/10/21~2015/10/24TAEGU, KOREA
Academic presentationUnrefereedOtherCo-authorRyota Fuwa;Yusuke Naka;Naoto Yoshida;Tomoko YonezawaIPSJ Kansai Conf.G-12 (6 pages)2015/9/28~Osaka Univ. Nakanoshima Center
Academic presentationUnrefereedOtherCo-authorHikaru Komori;Naoto Yoshida;Tomoko YonezawaIPSJ Kansai Conf.C-09 (6 pages)2015/9/28~Osaka Univ. Nakanoshima Center
Academic presentationUnrefereedOtherCo-authorNaoya Okamoto;Naoto Yoshida;Tomoko YonezawaIPSJ Kansai Conf.G-14 (8 pages)2015/9/28~Osaka Univ. Nakanoshima Center
Academic presentationUnrefereedOtherCo-authorKeiichiro Nagao;Naoto Yoshida;Tomoko YonezawaIPSJ Kansai Conf.C-08 (7 pages)2015/9/28~Osaka Univ. Nakanoshima Center
Academic presentationUnrefereedOtherCo-authorYusuke Naka;Tomoko YonezawaIPSJ Kansai Conf.C-10 (6 pages)2015/9/28~Osaka Univ. Nakanoshima Center
Academic presentationUnrefereedOtherCo-authorShota Yanagi;Naoto Yoshida;Tomoko YonezawaIPSJ Kansai Conf.G-13 (8 pages)2015/9/28~Osaka Univ. Nakanoshima Center
International academic conferenceWearable robot with vital sensors for elderly care and supportIn refereedOtherCo-authorHirotake Yamazoe;Tomoko YonezawaROMAN 2015 Interactive SessionIS-122015/9/2~Kobe, Japan
International academic conferenceCrossmodal Combination among Verbal, Facial, and Flexion Expression for Anthropomorphic AcceptabilityIn refereedOtherCo-authorTomoko Yonezawa;Naoto Yoshida;Jumpei NishinakaROMAN 2015pp.549-5542015/9/1~Kobe, Japan
Artistic workOtherCo-authoraoi-chan (team of students)IVRC20152015/9~2015/10
Artistic workOtherCo-authorninoude-hiko-tai (team of students)IVRC20152015/9~2015/10
Academic presentationSupport for Building User's Mindmap by Conversational Agent Moving Around NodesUnrefereedOtherCo-authorYuto Nishiumi;Naoto Yoshida;Tomoko YonezawaIEICE Tech Meeting HCSpp.1--52015/8/21~Ritsumeikan Univ., Kyoto, Japan
Academic presentationDesign of Virtual Agent Estimating Multiple Persons' Possession of ObjectsUnrefereedOtherCo-authorKaede Ueno;Naoto Yoshida;Tomoko YonezawaIEICE Tech Meeting HCSpp.7-122015/8/21~Ritsumeikan Univ., Kyoto, Japan
Academic presentationManagement and negotiation agent showing nonverbal behaviors among other members' presenceUnrefereedOtherCo-authorKunihiko Fujiwara;Naoto Yoshida;Tomoko YonezawaIEICE Tech Meeting HCSpp.13-182015/8/21~Ritsumeikan Univ., Kyoto, Japan
International academic conferenceIndirect Monitoring of Cared Person by Onomatopoeic Text of Environmental Sound and User's Physical StateIn refereedOtherCo-authorYusuke Naka;Naoto Yoshida;Tomoko YonezawaHCII2015, Springer International Publishing Switzerland, N. Streitz and P. Markopoulos (Eds.):DAPI 2015, LNCS 9189, pp.506-5172015/8/5~Los Angeles, USA
International academic conferenceAuditory browsing interface of ambient and parallel sound expression for supporting one-to-many communicationIn refereedOtherSingle-AuthorTomoko YonezawaHCII2015, Springer International Publishing Switzerland, N. Streitz and P. Markopoulos (Eds.):DAPI 2015, LNCS 9189, pp.224-2362015/8/5~Los Angeles, USA
International academic conferenceEvaluating Elements of Communicative Stuffed-toy Device Describes Scripts on SNSIn refereedOtherCo-authorHaruka Mase;Tomoko Yonezawa;Kazuki JoePDPTA 2015pp.310-3162015/7/28~Las Vegas, USA
Academic presentationUnrefereedOtherCo-authorHaruka Mase;Tomoko Yonezawa;Kazuki JoeIPSJ MPS Tech Meeting2015-MPS-104, vol.13, pp1-42015/7/28~Las Vegas, USA
Academic presentationUnrefereedOtherCo-authorYuki Ishikawa;Tomoko YonezawaThe 18th Meeting on Image Recognition and UnderstandingSS5-392015/7/27~Osaka, Japan
Academic presentationUnrefereedOtherCo-authorHirotake Yamazoe;Tomoko YonezawaThe 18th Meeting on Image Recognition and UnderstandingSS5-132015/7/27~Osaka, Japan
PapersInvestigation of Embedded Text Communication with Onomatopoeia of User's Bodily Motion and Environmental SoundsIn refereedAcademic JournalCo-authorYusuke Naka;Yosuke Ino;Naoto Yoshida;Tomoko YonezawaHISJ Journalvol.17, no.2, pp 97--1062015/5~
Academic presentationUnrefereedOtherCo-authorMadoka Mizutani;Yusuke Asai;Misato Shiojiri;Tomoko Yonezawa118th HISJ tech-meetingSIG-ACI-15, pp. 9--122015/3/26~
Academic presentationUnrefereedOtherCo-authorRyosuke Matsuoka;Naoto Yoshida;Tomoko Yonezawa118th HISJ tech-meetingSIG-ACI-15, pp. 37--402015/3/26~
Academic presentationUnrefereedOtherCo-authorRyota Fuwa;Naoto Yoshida;Tomoko Yonezawa118th HISJ tech-meetingSIG-ACI-15, pp. 33--362015/3/26~
Academic presentationUnrefereedOtherCo-authorKunihiko Fujiwara;Keiichiro Nagao;Naoto Yoshida;Tomoko YonezawaIPSJ Tech-meeting(ICS)2015-ICS-179(3), pp.1--82015/3/20~
Academic presentationUnrefereedOtherCo-authorHiroaki Ueda;Yukari Nakatani;Tomoko YonezawaIPSJ Tech-meeting(ICS)2015-ICS-179(4), pp.1--82015/3/20~
Academic presentationUnrefereedOtherCo-authorRiki Ishino;Yosuke Ino;Tomoko YonezawaIPSJ Tech. Meeting EC/MUSVOL.2015-MUS-106, NO.3, pp.1--6, VOL.2015-EC-35, NO.3, pp.1--6,2015/3/2~2015/3/3
Academic presentationUnrefereedOtherCo-authorSaori Umemoto;Yusuke Naka;Naoto Yoshida;Tomoko YonezawaIPSJ Tech. Meeting EC/MUSVOL.2015-MUS-106, NO.17, pp.1--6, VOL.2015-EC-35, NO.17, pp.1--6,2015/3/2~2015/3/3
Academic presentationUnrefereedOtherCo-authorShota Yanagi;Naoto Yoshida;Tomoko YonezawaIPSJ Tech. Meeting EC/MUSVOL.2015-MUS-106, NO.3, pp.1--6, VOL.2015-EC-35, NO.3, pp.1--6,2015/3/2~2015/3/3
Academic presentationUnrefereedOtherCo-authorYusuke Naka;Misato Shiojiri;Tomoko YonezawaIEICE Tech. Meeting WITvol.114, no.447, pp.41--462015/2/13~2015/2/14
Academic presentationUnrefereedOtherCo-authorYukari Nakatani;Tomoko YonezawaIEICE Tech. Meeting HCSHCS2014-88, pp.85-902015/1/30~2015/1/31
Academic presentationUnrefereedOtherCo-authorSho Hanasaki;Hikaru Komori;Naoto Yoshida;Tomoko YonezawaIPSJ Tech. Meeting HCIvol.2015-HCI-161, no.4, pp.1--42015/1/14~2015/1/15
Academic presentationUnrefereedOtherCo-authorMisato Shiojiri;Yusuke Naka;Tomoko YonezawaIPSJ Tech. Meeting HCIvol.2015-HCI-161, no.8, pp.1--72015/1/14~2015/1/15
PapersIn refereedAcademic JournalCo-authorSana Maekawa;Yukari Nakatani;Tomoko YonezawaIEICE Japanese Journal (Society D), Vol.J98-D,No.1, pp. 71--822015/1~This paper proposes an interactive AR puzzle system that enables users to learn phonogram characters through narrative scene design with 3D virtual animals. When the user arranges Hiragana-written AR markers in a line so as to spell the name of a specific animal, a 3D model of that animal is displayed on the set of AR markers. Each animal changes its attitude according to its relationship with the other animals in the same scene. To design and save scenes and scenarios for storytelling, the user can also set the weather and other conditions. This design aims at continuous use of and study with the system.
PapersEffectiveness of Ownership Expression for Real-world Objects by Facial Expression of Virtual AgentIn refereedAcademic JournalCo-authorNaoto Yoshida;Takuya Furuyama;Tomoko YonezawaIPSJ Journal vol.56 no.1, pp 411--4192015/1~This paper proposes a design of facial expressions for a virtual agent to express the agent's ownership of a real-world object. Ownership is a concept common to all people: it means possession of a certain object together with the exclusive rights to keep and use it. People show ownership expressions when they assert these exclusive rights during negotiations over ownership or the right of use. The process of possession is composed of the following four states: unpossession, possession, abandonment, and desire. We verify the effectiveness of the facial expressions in conveying the agent's ownership. (Recommended paper)
Academic presentationUnrefereedOtherCo-authorYukari Nakatani;Naoto Yoshida;Tomoko YonezawaHAI Symposium 2014P-212014/12/13~
Academic presentationUnrefereedOtherCo-authorMadoka Mizutani;Misato Shiojiri;Naoto Yoshida;Tomoko YonezawaHAI Symposium 2014P-222014/12/13~
Academic presentationUnrefereedOtherCo-authorYusuke Naka;Tomoko YonezawaHAI Symposium 2014P-242014/12/13~
Academic presentationUnrefereedOtherCo-authorHirotake Yamazoe;Tomoko YonezawaHAI Symposium 2014G-192014/12/13~
Academic presentationUnrefereedOtherCo-authorTomoko Yonezawa;Hirotake YamazoeHAI Symposium 2014P-232014/12/13~
Academic presentationUnrefereedOtherCo-authorYuichi Moriyasu;Yukari Nakatani;Yusuke Naka;Tomoko YonezawaHAI Symposium 2014G-82014/12/13~
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;Yukari Nakatani;Kento Kuboshima;Tomoko YonezawaHAI Symposium 2014G-22014/12/13~
Academic presentationUnrefereedOtherCo-authorMeng Xiaoshun;Naoto Yoshida;Tomoko YonezawaHAI Symposium 2014G-32014/12/13~
International academic conferenceReal-Time 3D Data Reduction and Reproduction of Spatial Model using Line Detection in RGB ImageIn refereedOtherCo-authorTomoko Yonezawa;Ken UedaSCIS-ISIS 2014pp.727-7302014/12/3~Kokura, Japan
International academic conferenceInteractive Browsing Agent for Novice User with Selective Information in DialogIn refereedOtherCo-authorTomoko Yonezawa;Yukari Nakatani;Naoto Yoshida;Ayaka KawamuraSCIS-ISIS 2014pp.731-7342014/12/3~Kokura, Japan
International academic conferenceSchedule Managing Agent among Group Members with Caring ExpressionsIn refereedOtherCo-authorKunihiko Fujiwara;Jumpei Nishinaka;Naoto Yoshida;Tomoko YonezawaSCIS-ISIS 2014pp.1564-15672014/12/3~Kokura, Japan
International academic conferenceAutomatic Acquirement of Toilet map using Wearable CameraIn refereedOtherCo-authorHirotake Yamazoe;Tomoko Yonezawa;Shinji AbeSCIS-ISIS 2014pp.1568-15712014/12/3~Kokura, Japan
International academic conferenceAn Interactive Stuffed-toy Device for Communicative Description on TwitterIn refereedOtherCo-authorHaruka Mase;Yuya Yoshida;Tomoko YonezawaSCIS-ISIS 2014pp.1361-13632014/12/3~Kokura, Japan
International academic conferenceSynchronized AR Environment for Multiple Users Using Animation MarkersIn refereedOtherCo-authorHirotake Yamazoe;Tomoko YonezawaVRST2014pp.237-2382014/11/11~
International academic conferencePersonal and Interactive Newscaster Agent based on Estimation of User's UnderstandingIn refereedOtherCo-authorNaoto Yoshida;Miyuki Yano;Tomoko YonezawaHAI2014pp.45--502014/10/29~
International academic conferenceSimplification of Wearable Message Robot with Physical Contact for Elderly's Outing SupportIn refereedOtherCo-authorHirotake Yamazoe;Tomoko YonezawaHAI2014pp.35--382014/10/29~
Academic presentationUnrefereedOtherCo-authorYuya Yoshida;Tomoko Yonezawa2014-CH-104, no.1, pp.1--6,2014/10/18~
PapersProposal and Evaluation of Toilet Timing Suggestion Methods for the ElderlyIn refereedAcademic JournalCo-authorAiri Tsuji;Tomoko Yonezawa;Hirotake Yamazoe;Shinji Abe;Noriaki Kuwahara;Kazunari MorimotoInternational Journal of Advanced Computer Science and ApplicationsVolume 5 Issue 10, pp.140--1452014/10~Elderly people need to urinate frequently, and when they go on outings they often have a difficult time finding restrooms. Because of this, research on a body-water management system is needed. Our proposed system calculates the timing of trips to the toilet in consideration of both the user's schedule and the amount of body water to be expelled, and recommends using the restroom with sufficient time to spare before the need to urinate. In this paper, we describe the system's suggestion methods and show experimental results for the toilet timing suggestions.
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;Yukari Nakatani;Kento Kuboshima;Tomoko YonezawaC052014/9/17~
Academic presentationUnrefereedOtherCo-authorYusuke Naka;Naoto Yoshida;Hiroki Kawaguchi;Tomoko YonezawaC022014/9/17~
Academic presentationUnrefereedOtherCo-authorKunihiko Fujiwara;Naoto Yoshida;Yukari Nakatani;Tomoko YonezawaC072014/9/17~
Academic presentationUnrefereedOtherCo-authorHiroaki Ueda;Yukari Nakatani;Tomoko YonezawaC032014/9/17~
Academic presentationUnrefereedOtherCo-authorKeisuke Kimura;Yuya Yoshida;Tomoko YonezawaC062014/9/17~
Academic presentationUnrefereedOtherCo-authorTaichi Goda;Yuya Yoshida;Hiroki Kawaguchi;Tomoko Yonezawapp.85-892014/9/12~
Academic presentationUnrefereedOtherCo-authorMisato Shiojiri;Yukari Nakatani;Yuya Yoshida;Yosuke Inoh;Tomoko Yonezawa2014/9/9~
Academic presentationUnrefereedOtherCo-authorHirotake Yamazoe;Tomoko Yonezawa2014/9/9~
Academic presentationUnrefereedOtherSingle-AuthorTomoko Yonezawa2014/9/9~
Academic presentationUnrefereedOtherSingle-AuthorTomoko Yonezawa2014/9/3~
Academic presentationUnrefereedOtherCo-authorYosuke Ino;Yuya Yoshida;Tomoko Yonezawa1-10-2, pp.1439--14402014/9/3~
Academic presentationUnrefereedOtherCo-authorRiki Ishino;Yusuke Naka;Yuya Yoshida;Yusuke Matsui;Tomoko Yonezawa2014-MUS-104(18), pp.1--62014/8/23~
International academic conferenceA Structure of Wearable Message-robot for Ubiquitous and Pervasive ServicesIn refereedOtherCo-authorTomoko Yonezawa;Hirotake YamazoeHCII2014, Springer International Publishing Switzerland, N. Streitz and P. Markopoulos (Eds.):DAPI 2014, LNCS 8530, pp.400--4112014/6/25~Hersonisos, GREECE
Academic presentationUnrefereedOtherCo-authorYusuke Naka;Hitomi Nakamura;Tomoko Yonezawapp.1--62014/3/27~
Academic presentationUnrefereedOtherCo-authorNaoya Okamoto;Yosuke Ino;Tomoko Yonezawapp.33--362014/3/27~
Academic presentationUnrefereedOtherCo-authorYuichi Moriyasu;Naoto Yoshida;Yusuke Naka;Tomoko Yonezawapp.37--402014/3/27~
Academic presentationUnrefereedOtherCo-authorKunihiko Fujiwara;Naoto Yoshida;Yukari Nakatani;Tomoko Yonezawapp.41--462014/3/27~
Academic presentationUnrefereedOtherCo-authorXiaoshun Meng;Yukari Nakatani;Naoto Yoshida;Tomoko Yonezawapp.1--62014/3/26~
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;Miyuki Yano;Tomoko Yonezawapp.13--182014/3/26~
Academic presentationUnrefereedOtherCo-authorMisato Shiojiri;Yukari Nakatani;Tomoko Yonezawapp.7--122014/3/26~
International academic conferenceInvoluntary Expression of Embodied Robot Adopting Goose BumpsIn refereedOtherCo-authorTomoko Yonezawa;Xiaoshun Meng;Naoto Yoshida;Yukari NakataniHRI 2014pp.254--2552014/3/4~Bielefeld, GERMANY
International academic conferenceBreatter: A Simulation of Living Presence with Breath that Corresponds to UtterancesIn refereedOtherCo-authorYukari Nakatani;Tomoko YonezawaHRI 2014pp.256--2572014/3/4~Bielefeld, GERMANY
Academic presentationIn refereedOtherCo-authorFumihiro Tomiyasu;Yuki Muramatsu;Tomoko Yonezawa;Takatsugu Hirayama;Kenji MaseInteraction Symposiumpp.290--2952014/2/27~NII
Academic presentationUnrefereedOtherCo-authorTakuya Furuyama;Naoto Yoshida;Tomoko Yonezawavol.113, no.426, pp.117-1222014/2/1~
Academic presentationUnrefereedOtherCo-authorHaruka Mase;Yukari Nakatani;Yuya Yoshida;Tomoko Yonezawavol.113, no.426, pp.175--1802014/2/1~
Academic presentationUnrefereedOtherCo-authorXiaoshun Meng;Yukari Nakatani;Yuya Yoshida;Naoto Yoshida;Tomoko Yonezawavol.113, no.426, pp.107--1122014/2/1~
Academic presentationUnrefereedOtherCo-authorYukari Nakatani;Xiaoshun Meng;Tomoko Yonezawavol.113, no.426, pp.113--1162014/2/1~
Academic presentationUnrefereedOtherCo-authorYukari Nakatani;Tomoko Yonezawavol.113, no.426, pp.181--1862014/2/1~
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;Yukari Nakatani;Tomoko Yonezawavol.113, no.426, pp.171--1742014/2/1~
Academic presentationUnrefereedOtherCo-authorYosuke Ino;Naoto Yoshida;Yukari Nakatani;Yuya Yoshida;Tomoko YonezawaMVE2013-38, pp.41-442014/1/23~
Academic presentationUnrefereedOtherCo-authorHiroki Kawaguchi;Arisa Hayashi;Yosuke Inoh;Naoto Yoshida;Tomoko YonezawaMVE2013-38, pp.49-522014/1/23~
Academic presentationUnrefereedOtherCo-authorRiki Ishino;Yosuke Ino;Yukari Nakatani;Naoto Yoshida;Tomoko YonezawaMVE2013-38, pp.53--582014/1/23~
Academic presentationUnrefereedOtherCo-authorMisato Shiojiri;Yukari Nakatani;Yuya Yoshida;Tomoko YonezawaMVE2013-38, pp141--1462014/1/23~
Academic presentationUnrefereedOtherCo-authorKeisuke Kimura;Yusuke Naka;Yuya Yoshida;Tomoko YonezawaMVE2013-38, pp.45--482014/1/23~
Academic presentationUnrefereedOtherCo-authorKunihiko Fujiwara;Naoto Yoshida;Yukari Nakatani;Tomoko YonezawaP-12 (6 pages)2013/12/8~
Academic presentationUnrefereedOtherCo-authorTomoko Yonezawa;Hirotake YamazoeP-10 (6 pages)2013/12/8~
Academic presentationUnrefereedOtherCo-authorKento Kuboshima;Yuya Yoshida;Yukari Nakatani;Naoto Yoshida;Tomoko YonezawaP-12 (6 pages)2013/12/8~
Academic presentationUnrefereedOtherCo-authorYukari Nakatani;Tomoko YonezawaP-15 (2 pages)2013/12/8~
Academic presentationUnrefereedOtherCo-authorYukari Nakatani;Hiroaki Ueda;Ayaka Kawamura;Tomoko YonezawaP-14 (4 pages)2013/12/8~
Academic presentationUnrefereedOtherCo-authorMiyuki Yano;Naoto Yoshida;Tomoko YonezawaP-13 (4 pages)2013/12/8~
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;Tomoko YonezawaP-16 (4 pages)2013/12/8~
PapersIn refereedAcademic JournalCo-authorTomoko Yonezawa;Hirotake Yamazoe;Akira Utsumi;Shinji AbePaladyn. Journal of Behavioral RoboticsVolume 4, Issue 2, pp. 113-1222013/12~
Academic presentationUnrefereedOtherCo-authorMiyuki Yano;Naoto Yoshida;Tomoko Yonezawapp.37--422013/11/27~
Academic presentationUnrefereedOtherCo-authorAyaka Kawamura;Yukari Nakatani;Misato Shiojiri;Tomoko Yonezawapp.33--362013/11/27~
Academic presentationUnrefereedOtherCo-authorYuri Fukuda;Yusuke Naka;Yuya Yoshida;Tomoko Yonezawapp.29--322013/11/27~
Academic presentationUnrefereedOtherCo-authorJumpei Nishinaka;Naoto Yoshida;Tomoko Yonezawavol.113, no.283, pp.1-62013/11/9~
Academic presentationUnrefereedOtherCo-authorYusuke Naka;Keisuke Kimura;Yukari Nakatani;Tomoko Yonezawavol.113, no.283, pp.69-722013/11/9~
Academic presentationUnrefereedOtherCo-authorSana Maekawa;Yukari Nakatani;Tomoko Yonezawavol.113, no.283, pp.7-122013/11/9~
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;Tomoko Yonezawavol.113, no.283, pp.79-842013/11/9~
International academic conferenceMixticky: a virtual multimedia sticky recordable/browsable around user using smart phoneUnrefereedOtherCo-authorHirotake Yamazoe;Tomoko YonezawaACPR2013pp.637--6412013/11/8~Naha, JAPANIn this paper, we propose "Mixticky," an effective scheme for browsing and recording memorandums, like sticky notes, in a three-dimensional (3D) virtual space. We recognize various objects and their existence in the real world. For intuitively recording and recalling memos in various media, it is important to achieve the same browsability and contemporaneousness as with real objects, even on a smartphone. Mixticky allows the user to put memos on, and peel them off, a virtual balloon around her/him at each relative direction (e.g., front, left, or 45 degrees to the right of front). The user can record and browse voice, movie, picture, hand-written drawing, and text memos as virtual sticky notes at each direction using the orientation of the smartphone. The system interprets the user's snapping gestures toward the smartphone as metaphors of the "put" and "peel off" motions on real sticky notes. The virtual balloon can be rebuilt in various scenes so that the user can easily continue her/his thinking activity anywhere.
International academic conferencePhysical Contact using Haptic and Gestural Expressions for Ubiquitous Partner RobotIn refereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe;Shinji AbeIROS2013pp.5680-56852013/11~Tokyo, JAPANIn this paper, we propose a portable robot that expresses physical contacts in parallel with other modalities. It enfolds the user's arm in its arms and taps the user's arm. The physical contact expressions are generated through a combination of several haptic stimuli and the robot's anthropomorphic behaviors based on its internal state. The aim of our research is to build a caregiver-like robot medium. The system was designed for gentle and delicate communication between the user and the robot during the user's outings. The haptic stimuli express warm/cold, patting, and squeezing. Experimental results show that the robot's haptic communicative behaviors increase the intelligibility of its messages and familiar impressions of the robot.
Academic presentationUnrefereedOtherCo-authorYuya Yoshida;Naoto Yoshida;Tomotsugu Matsuda;Masaki Ogino;Tomoko Yonezawapp.85--902013/10/4~
Academic presentationUnrefereedOtherCo-authorHaruka Mase;Kunihiko Fujiwara;Saori Umemoto;Arisa Hayashi;Tomoko Yonezawapp.161--1642013/10/4~
Academic presentationUnrefereedOtherCo-authorMisato Shiojiri;Yukari Nakatani;Tomoko Yonezawapp.77--842013/10/4~
Academic presentationUnrefereedOtherCo-authorNanase Ishikawa;Hiroki Kawaguchi;Yuya Yoshida;Tomoko Yonezawapp.194--1952013/10/4~
Academic presentationUnrefereedOtherCo-authorKento Yahashi;Nanase Ishikawa;Yuya Yoshida;Tomoko Yonezawapp.246--2482013/10/4~
Academic presentationUnrefereedOtherCo-authorTakuya Furuyama;Naoto Yoshida;Tomoko YonezawaC012013/9/25~
Academic presentationUnrefereedOtherCo-authorAyaka Kawamura;Yukari Nakatani;Tomoko YonezawaC022013/9/25~
Academic presentationUnrefereedOtherCo-authorHaruka Mase;Yuya Yoshida;Tomoko YonezawaC032013/9/25~
Academic presentationUnrefereedOtherCo-authorXiaoshun Meng;Tomoko YonezawaC072013/9/25~
Academic presentationUnrefereedOtherCo-authorJunichi Izutani;Tomoko YonezawaG062013/9/25~
Academic presentationUnrefereedOtherCo-authorYuya Yoshida;Tomoko YonezawaG112013/9/25~
Academic presentationUnrefereedOtherCo-authorRiki Ishino;Yusuke Naka;Yuya Yoshida;Tomoko Yonezawa2013-MUS-100(34), pp.1-62013/9/1~
International academic conferenceWearable partner agent with anthropomorphic physical contact with awareness of clothing and postureIn refereedOtherCo-authorTomoko Yonezawa;Hirotake YamazoeISWC2013pp.77--802013/9~Zurich, SWITZERLANDIn this paper, we introduce a wearable partner agent that makes physical contact corresponding to the user's clothing, posture, and detected contexts. Physical contacts are generated by combining haptic stimuli and anthropomorphic motions of the agent. The agent performs two types of behaviors: a) it notifies the user of a message by patting the user's arm, and b) it generates emotional expression by strongly enfolding the user's arm. Our experimental results demonstrated that haptic communication from the agent increases the intelligibility of the agent's messages and familiar impressions of the agent.
Artistic workOtherCo-authorgreen lab. (team of students)IVRC20132013/9~2013/10
Academic presentationUnrefereedOtherCo-authorHirotake Yamazoe;Tomoko Yonezawapp. 25-282013/8/19~
Academic presentationUnrefereedOtherCo-authorTakashi Kato;Yuya;Yosuke Inoh;Tomoko Yonezawapp.29-302013/8/19~
Academic presentationUnrefereedOtherCo-authorYukari Nakatani;Tomoko Yonezawapp.31-382013/8/19~
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;Takuya Furuyama;Tomoko Yonezawapp.39-422013/8/19~
Academic presentationUnrefereedOtherCo-authorJumpei Nishinaka;Naoto Yoshida;Tomoko Yonezawapp.43-462013/8/19~
Academic presentationUnrefereedOtherCo-authorSoma Tanaka;Yuya Yoshida;Naoto Yoshida;Tomoko Yonezawapp.47-502013/8/19~
Academic presentationUnrefereedOtherCo-authorYusuke Naka;Keisuke Kimura;Yukari Nakatani;Tomoko Yonezawapp.51-562013/8/19~
International academic conferenceAbotar: An Expressive Method of Web Communication using Appearances of Avatars Attached to Text Messages and RemarksIn refereedOtherCo-authorYukari Nakatani;Tomoko YonezawaiHAI2013II-p62013/8~Sapporo, JAPAN
International academic conferenceSCoViA: Effectiveness of spatial communicative virtual agent based on motion parallaxIn refereedOtherCo-authorNaoto Yoshida;Tomoko YonezawaiHAI2013II-p72013/8~Sapporo, JAPAN
International academic conferenceInvestigation of Object-indicating Behaviors -Between Spatial Difficulty and Robot’s Degree of Freedom-In refereedOtherCo-authorTomoko Yonezawa;Hirotake YamazoeiHAI2013II-p42013/8~Sapporo, JAPANIn this paper, we introduce an expressive method of the robot's "effort" and "hardships" while it indicates a particular object. For popular robot designs with few degrees of freedom, it is very important to design delicate but effective behaviors that express the additional effort needed to overcome the difficulty. Accordingly, we propose adding delicate motion of the robot's head to the pointing gesture of the robot's arm. The results of our preliminary subjective and objective experiments showed a) differences in object sensation by age, b) the strength of the arm for object indication, and c) the possibility of the robot expressing "effort" using additional motion of the robot's face toward the left side, differing from the direction of the arm. Finally, we suggest a geometric model of difficulty and a gradual expression of effort corresponding to the difficulty.
International academic conferenceAppearance and Physical Presence of Anthropomorphic Media in Parallel with Non-face-to-face CommunicationIn refereedOtherCo-authorTomoko Yonezawa;Noriko Suzuki;Kenji Mase;Kiyoshi KogureiHAI2013III-I-32013/8~Sapporo, JAPANPuppets could become a new tool for expressive communication, in parallel with traditional communication channels in human-human interaction. This research aimed to verify the effectiveness of the appearance and embodied presence of anthropomorphic media. In this paper, we focused on the usage of an anthropomorphic medium and the user's conscious or unconscious behaviors in parallel with non-face-to-face conversation. We conducted a non-face-to-face conversational experiment adopting a stuffed-toy robot that allowed expression via motion and vocal cues. A bare-robot condition was prepared to compare the appearance, and a monitor showing the stuffed toy was adopted to compare the embodied presence. The analyses of the results showed that the appearance affects the unconscious behaviors of the user and that the embodied presence affects the conversational utterances. We conclude that the physical embodied presence and appearance of the stuffed-toy robot play an important role in non-verbal communication in non-face-to-face conversation with the robot system.Selected as an Honorable Mention Paper
International academic conferenceVisual language communication system with multiple pictograms converted from weblog textsIn refereedOtherCo-authorMisato Shiojiri;Yukari Nakatani;Tomoko YonezawaIASDR201313A-32013/8~Tokyo, JAPANIn this paper, we aim to build a new visual communication method for people who face religious, psychological, or linguistic communication difficulties, such as online/offline communication with foreigners or face-to-face conversation between hearing-impaired and other people. We propose a communication method using expressions of multiple pictograms. We implemented a Web server system that converts English texts into multiple pictograms based on morphological analysis. In the experiment, we examined the levels of understanding, intuitiveness, and consent from the following three viewpoints: (1) chunks of pictograms as sentences, (2) clarity due to differences in the linear order of the pictograms, and (3) the method of layout.
International academic conferenceChoreographic design visualization of enormous dancers for authoring and browsing dance motion and formationIn refereedOtherCo-authorYuya Yoshida;Tomoko YonezawaIASDR201305E-32013/8~Tokyo, JAPANIn this paper, we propose a choreographic design tool that visualizes a large number of dancers' motions and their total formation. The system aims not only to preview the total impression intuitively but also to improve the choreographic design before the dancers' real performance. We focus on the simultaneous design of both formation and motion to balance the total view. Motion data captured by Kinect are obtained as fragments of the dance. The choreographer first places multiple CG dancers in a virtual space by drawing reference lines or equilateral polygons in 3D coordinates, and then browses the total view of the dance motions from flexible viewpoints. Each motion is assigned to the corresponding group of dancers based on multiple channels' timelines. This configuration enables a smooth choreographic procedure for a large number of dancers from the viewpoint of both detail and whole. Finally, we verified a) the browsability and b) the ease of formation design in our proposed authoring system.
Academic presentationUnrefereedOtherCo-authorHikaru Omoto;Tomoko Yonezawa2013-EC-28(11), pp.1--62013/5~Osaka University
International academic conferenceAttitude-aware communication behaviors of a partner robot: politeness for the masterIn refereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe;Yuichi Koyama;Naoto Yoshida;Shinji Abe;Kenji MaseHRI2013 demoD192013/3~Tokyo, JAPANThis paper proposes attitude-aware communication behaviors for a daily-partner robot. For appropriate and familiar anthropomorphic interaction, the robot should wait for a suitable timing to talk according to the user's situation, while trying to notify the user of its need to speak (speech-implying behaviors). The proposed robot determines the user's context based on the user's gaze and utterance.
International academic conferenceIkitomical Model: extended body sensation through a cardiovascular robotIn refereedOtherCo-authorYuka Nagata;Naoto Yoshida;Tomoko YonezawaHRI2013 demoD202013/3~Tokyo, JAPANIn this research, we propose a cardiovascular robot that externalizes the user's heartbeats in terms of somatopsychology. "Ikitomical Model" is a device-art piece that consists of a heart model, tubes that show the blood flow, and a pulsebeat sensor, in order to show the user's "living" heartbeat in real time. This implementation provides not only visual externalization but also auditory and tactile sensations of the user's heartbeat motion.
Academic presentationUnrefereedOtherCo-authorMisato Shiojiri;Tomoko Yonezawapp.1--62013/3~
Academic presentationUnrefereedOtherCo-authorAyaka Kawamura;Misato Shiojiri;Yukari Nakatani;Junichi Izutani;Tomoko Yonezawapp.17--202013/3~
Academic presentationUnrefereedOtherCo-authorSana Maekawa;Tomoko Yonezawapp.21--242013/3~
Academic presentationUnrefereedOtherCo-authorYusuke Naka;Tomoko Yonezawapp.25--282013/3~
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;;Tomoko Yonezawa2013-ICS-171, vol.8, pp.1-42013/3~
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;;Tomoko Yonezawavol.112:455(HCS2012 76-110) pp.1-42013/3~
Academic presentationUnrefereedOtherCo-authorYuya Yoshida;Tomoko Yonezawavol.112:455(HCS2012 76-110) pp.109-1122013/3~
Academic presentationUnrefereedOtherCo-authorYukari Nakatani;Tomoko Yonezawavol.112:455(HCS2012 76-110) pp.131-1362013/3~
Academic presentationUnrefereedOtherCo-authorKen Ueda;Tomoko YonezawaIEICE-SIG-MVE, vol.112, no. 385, pp. 269--2732013/1~
Academic presentationIn refereedOtherCo-authorYuya Yoshida;Yuka Iwata;Tomoko Yonezawa2012/12~Kyoto Institute of Technology
Academic presentationIn refereedOtherCo-authorNaoto Yoshida;Tomoko Yonezawa2012/12~Kyoto Institute of Technology
Academic presentationIn refereedOtherCo-authorYukari Nakatani;Misato Shiojiri;Hitomi Nakamura;Tomoko Yonezawa2012/12~Kyoto Institute of Technology
Academic presentationUnrefereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe;Shinji Abepp.15--182012/8~
Academic presentationUnrefereedOtherCo-authorHirotake Yamazoe;Yuichi Koyama;Tomoko Yonezawa;Shinji Abe;Kenji Masepp.19--222012/8~
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;Tomoko Yonezawapp.23--282012/8~
Academic presentationUnrefereedOtherCo-authorYuya Yoshida;Yuka Iwata;Tomoko Yonezawapp.29--342012/8~
LectureUnrefereedOtherCo-authorTomoko Yonezawa2012/3~
Academic presentationUnrefereedOtherCo-authorMisato Shiojiri;Tomoko Yonezawa2012-EC-23(2), pp.1--62012/3~
Academic presentationUnrefereedOtherCo-authorYuya Yoshida;Yuka Iwata;Tomoko Yonezawapp. 3--82012/3~
Academic presentationUnrefereedOtherCo-authorNaoto Yoshida;Tomoko Yonezawapp. 9--122012/3~
Academic presentationUnrefereedOtherCo-authorYukari Nakatani;Misato Shiojiri;Hitomi Nakamura;Tomoko Yonezawapp.13--182012/3~
Academic presentationUnrefereedOtherCo-authorMisato Shiojiri;Tomoko Yonezawapp. 19--222012/3~
Academic presentationUnrefereedOtherCo-authorMai Hatano;Tomoko Yonezawa;Naoko Yoshii;Masami Takata;Kazuki Joe2012-MPS-87(33), pp.1--62012/3~
LectureUnrefereedOtherCo-authorTomoko Yonezawa2012/2~
PapersAnthropomorphic awareness of partner robot to user's situation based on gaze and speech detectionIn refereedAcademic JournalCo-authorTomoko Yonezawa;Hirotake Yamazoe;Akira Utsumi;Shinji AbeInternational Journal of Autonomous and Adaptive Communications Systems, Vol. 5, No. 1, pp. 18-382012~This paper introduces a daily-partner robot that is aware of the user's situation through gaze and utterance detection. For appropriate anthropomorphic interaction, the robot should talk to the user at a proper timing without interrupting her/his task. Our proposed robot 1) estimates the user's context (the target of her/his speech) by detecting her/his gaze and utterance, 2) expresses its need to speak to the user by silent gaze-turns towards the user and the object of joint attention (speech-implying behaviour), and 3) tells the message when the user talks to the robot. Based on preliminary results showing sufficient human sensitivity to the robot's speech-implying behaviours, we evaluate the proposed behavioural model. The results show that this crossmodal awareness is effective for respectful communication: silent behaviours effectively show the robot's intention to speak and draw the user's attention without disturbing the user's ongoing task.
International academic conferenceReal-Time Polygon Reconstruction for Digital archives of Cultural PropertiesIn refereedOtherCo-authorMegumi Okumoto;Yuri Iwakata;Asuka Komeda;Tomoko Yonezawa;Masami Takata;Kazuki JoeJSST2012OS9-12 (7 pages)2012~Kobe, JAPANIn this paper, we propose a polygon reconstruction method for digital archives of cultural properties. To use digital archives of cultural properties in VR systems for research purposes, the mesh resolution must be changeable on the demand of users. Additionally, polygon reconstruction needs to be executed in real time so that users feel comfortable with their demands. To execute polygon reconstruction in real time, the proposed method adopts a history of preliminary polygon reductions. To validate the VR system, the polygon reconstruction method is evaluated. As the result of the experiment, we confirm that polygon reconstruction is performed within 1.0 second, which is considered "real-time" by the definition of a typical user interface.
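The history-based idea in the abstract above can be sketched abstractly: a preliminary simplification pass records each reduction step once, and changing the resolution at run time replays or undoes steps from that history instead of re-simplifying the mesh. The representation below (opaque step labels, a simple level counter) is an illustrative assumption, not the paper's data structure.

```python
# Hypothetical sketch of history-based polygon reconstruction: reduction
# steps recorded offline are (un)applied at run time to reach any resolution
# level quickly, which is what makes sub-second response plausible.

class HistoryMesh:
    def __init__(self, reduction_steps):
        # steps ordered from the first (finest) to the last (coarsest) reduction
        self.history = list(reduction_steps)
        self.applied = 0  # number of reduction steps currently applied

    def set_level(self, target_applied: int) -> list:
        """Move to a resolution level; returns the operations performed."""
        target = max(0, min(target_applied, len(self.history)))
        changed = []
        while self.applied < target:          # coarsen: apply next reduction
            changed.append(("collapse", self.history[self.applied]))
            self.applied += 1
        while self.applied > target:          # refine: undo last reduction
            self.applied -= 1
            changed.append(("split", self.history[self.applied]))
        return changed
```

Because each level change only touches the steps between the current and target levels, the cost is proportional to the resolution delta rather than to the whole mesh.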
International academic conferenceManipulation of a VR object using user's pre-motionIn refereedOtherCo-authorShiori Mizuno;Asuka Komeda;Tomoko Yonezawa;Naoko Yoshii;Masami Takata;Kazuki JoeJSST2012OS9-13 (7 pages)2012~Kobe, JAPANIn this paper, we propose a new method to manipulate objects in VR. The purpose of the method is to give users an intuitively easy interface to VR that uses their natural behaviors as commands when they manipulate objects in VR. To capture natural behaviors as the interface to VR, we present definitions that classify the kinds of objects in VR and the motions with which users may perform their behaviors toward the objects. The definitions of motions are based on users' initial behaviors. Such initial behaviors are known as pre-shaping: when a human looks at an object to grasp it, he/she shapes his/her hand according to the shape of the object in advance. In a VR system, when an object is presented to a user, he/she may exhibit pre-shaping if he/she is interested in the presented object. If the VR system recognizes the pre-shaping for the presented object, no special commands are required from the user. To realize this idea, we give definitions to classify objects and their corresponding possible pre-shaping motions. Characteristics of objects are classified into shape, situation, size, weight, and hardness. Characteristics of motions are further classified into two classes by the number of hands: in the case of one hand, the fingers' shapes are classified; in the case of both hands, the distance between the right and left hands is classified. Using the above definitions, we develop a prototype system to validate the classifications. Consequently, the defined motions are correctly recognized according to the presented objects. This means that such pre-shaping-based object manipulation in VR is possible and promising.
International academic conferenceAR based Spatial Reasoning Capacity Training for StudentsIn refereedOtherCo-authorMai Hatano;Tomoko Yonezawa;Naoko Yoshii;Masami Takata;Kazuki JoePDPTA2012Vol.II, pp.751-7572012~Las Vegas, USAIn this paper, we propose two methods to train students' spatial reasoning capacity using AR (Augmented Reality). The first method supports students in rotating spatial objects more easily with two AR markers. One marker is used for questions, on which several blocks and a landmark (with the shape of a chick) are displayed. The other marker is used for answers, on which blocks are moved freely. The layout of the blocks toward the chick is selected on the marker. The second method adds limitation of rotation on the marker using some Arduino-based hardware, supporting students in rotating spatial objects partially. To validate the effect of the trained spatial reasoning capacity, we perform an experiment using the first method. The analysis results show that spatial-object recognition accuracy increases with the AR-based learning. To validate the effect of rotation angles, we perform other experiments using the second method. The analysis result shows that a rotation angle of sixty degrees is best for the training of spatial reasoning capacity.
International academic conferenceProposal and Evaluation of the Toilet Timing Suggestion Method for the ElderlyIn refereedOtherCo-authorAiri Tsuji;Tomoko Yonezawa;Hirotake Yamazoe;Noriaki Kuwahara;Kazunari MorimotoICCI*CC 2012pp. 178-1852012~Calgary, CANADAWe are researching and developing a toilet timing suggestion method for the elderly in order to support comfortable outings, because the elderly are likely to experience frequent urination and often face the difficult situation of searching for a restroom while holding their water. Our proposed system calculates the toilet timing in consideration of both their outing schedule and their amount of body water, and recommends going to a restroom sufficiently before they feel the need to urinate. In order to implement the system, we devised a physiological formula for non-invasive estimation of the amount of body water, on which the toilet timing calculation is based. We also devised a method of suggesting the toilet timing to the elderly so that it neither interferes with their activities during the outing nor is ignored by them. In this paper, we describe our proposed system and show the experimental results evaluating both the toilet timing calculation and the suggestion method.Best Paper Award
CommentaryUnrefereedOtherCo-authorHirotake Yamazoe;Akira Utsumi;Tomoko Yonezawa;Shinji Abe画像ラボ (Image Laboratory)Vol.23, No.6, pp.23-282012~
Academic presentationIn refereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe;Shinji AbeII-1B-32011/12~Kyoto Institute of Technology
Academic presentationIn refereedOtherCo-authorNaoto Yoshida;Tomoko YonezawaIII-1A-62011/12~Kyoto Institute of Technology
LectureUnrefereedOtherCo-authorTomoko Yonezawa2011/11~
LectureUnrefereedOtherCo-authorTomoko Yonezawa2011/10~
International academic conferenceEstimation of User Conversational States based on Combination of User Actions and Feature NormalizationIn refereedOtherCo-authorHirotake Yamazoe;Yuichi Koyama;Tomoko Yonezawa;Shinji Abe;Kenji Maseconversational state;recognition;user actionACM the 6th Workshop of CASEMANSpp.33-372011/9/18~BeijingIn this paper, we propose a method to estimate such user conversational states as concentrating/not concentrating. We previously proposed a robot-assisted videophone system to sustain conversations between elderly people. In such videophone systems, the user's conversational situation must be estimated so that the robot behaves appropriately. The proposed method employs i) elemental actions and combinations of user elemental actions as features for recognition and ii) the normalization of feature vectors based on the frequencies of actions. The experimental results show the effectiveness of our method.
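The two feature ideas named in that abstract can be sketched as follows. This is a minimal illustration under assumptions: the action labels are invented, and the paper's actual feature set and normalization are defined in the full text.

```python
# Hypothetical sketch of the two ideas in the abstract: features built from
# elemental user actions plus their co-occurring combinations, and
# normalization of the count vector by overall action frequency so that
# very active and very quiet users become comparable.

from itertools import combinations

def action_features(actions):
    """Count elemental actions and co-occurring pairs within one time window."""
    counts = {}
    for a in actions:
        counts[a] = counts.get(a, 0) + 1
    # add one combination feature per pair of distinct actions in the window
    for a, b in combinations(sorted(set(actions)), 2):
        key = f"{a}+{b}"
        counts[key] = counts.get(key, 0) + 1
    return counts

def normalize(counts):
    """Scale counts by the total frequency so the vector sums to 1."""
    total = sum(counts.values()) or 1
    return {k: v / total for k, v in counts.items()}
```

The normalized vector would then feed a standard classifier of conversational state (concentrating or not).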
International academic conferencePrivacy Protected Life-context-aware Alert by Simplified Sound Spectrogram from Microphone SensorIn refereedOtherCo-authorTomoko Yonezawa;Naoki Okamoto;Hirotake Yamazoe;Shinji Abe;Fumio Hattori;Norihiro HagitaACM the 6th Workshop of CASEMANSpp.4-92011/9/18~BeijingThis paper introduces the design of a life-context-aware alert system based on multiple small microphone sensors placed at various locations in the home. In order to support the comfortable daily lives of elderly people who live alone, it is important to know their daily activities at home without exposing their privacy. When an emergency appears in the monitored data, the system must report the situation to hospitals, ambulance services, or the users' families. To reduce the data for fast calculation on a PIC and to protect privacy, the system adopts a simplified sound spectrogram from each installed microphone module. The system first analyses these multiple signals to roughly understand what situation is occurring and decides what type of daily-life activity is found. When the user's life shows an emergent situation, the system alerts the appropriate contact person or institution. This paper especially describes how to simplify the raw data from the microphone sensor using the frequency/time domain, both to reduce the amount of data and to protect privacy.
International academic conferenceVoisticky: Sharable and Portable Auditory Balloon with Voice Sticky Posted and Browsed by User's HeadIn refereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe;Hiroko TerasawaIEEE ICSPCC 2011pp. 118-1232011/9/14~2011/9/16Xi'anIn this paper, we introduce an effective scheme for browsing and sharing personal voice memos using a three-dimensional (3D) auditory space. We propose an intuitive framework to record and browse numerous personal voice memos, using the user's head direction to post each utterance as a voice memo in a user-relative auditory direction. The user can define the sharing attribute of each voice memo in the edit mode so that personal and public voice memos are appropriately shared with the permitted users. The shared spaces, which are hemispherical auditory balloons, are overlapped or arranged in separate angles according to the number of sharing users. The results of user tests showed that the user could intuitively recognize the existence of all the memos, and that our proposed scheme may evoke different feelings and cognitions of the voice memos compared to conventional voice memos.
LectureUbiquitous generation: change of education for informaticsUnrefereedOtherCo-authorTomoko Yonezawa2011/9~Xi'an, CHINA
PapersAssisting video communication by an intermediating robot system corresponding to each user's attitudeIn refereedAcademic JournalCo-authorTomoko Yonezawa;Hirotake Yamazoe;Yuichi Koyama;Shinji Abe;Kenji Maseintermediating robot;video communication;conversational attitudeHuman Interface Society JournalVol.3 No.32011/8~Human Interface Society of JapanThis paper proposes a video communication assist system using a companion robot in coordination with the user's conversational attitude toward the communication. In order to maintain a conversation and to achieve comfortable communication, it is necessary to provide attitude-aware assistance to the user. First, a) the system estimates the user's conversational state by a machine learning method. Next, b-1) the robot appropriately expresses active listening behaviors, such as nodding and gaze turns, to compensate for the listener's attitude when she/he is not really listening to the other user's speech, b-2) the robot shows communication-evoking behaviors (topic provision) to compensate for the lack of a topic, and b-3) the system switches the camera images to create an illusion of eye contact, corresponding to the current context of the user's attitude. From empirical studies and a demonstration experiment, i) both the robot's active listening behaviors and the switching of the camera image compensate for the other person's attitude, ii) elderly people prefer long intervals between the robot's behaviors, and iii) the topic provision function is effective for awkward silences.
Academic presentationUnrefereedOtherCo-authorHiroko Terasawa;Tomoko Yonezawa;Hirotake YamazoeNO.2-2-132011/3~
Academic presentationUnrefereedOtherSingle-AuthorTomoko YonezawaSIG-DE-05, pp.53--56, 20112011/3~
Academic presentationUnrefereedOtherCo-authorSIG-DE-05, pp.49--52, 20112011/3~
Academic presentationUnrefereedOtherCo-authorHirotake Yamazoe;Tomoko YonezawaSIG-DE-05, pp.25--29, 20112011/3~
Academic presentationUnrefereedOtherCo-authorAiri Tsuji;Tomoko Yonezawa;Hirotake Yamazoe;Shinji Abe;Noriaki Kuwahara;Kazunari MorimotoSIG-DE-05, pp.21--24, 20112011/3~
PapersAutomatic calibration of 3D eye model for single-camera based gaze estimationIn refereedAcademic JournalCo-authorHirotake Yamazoe;Akira Utsumi;Tomoko Yonezawa;Shinji AbeIEICE Japanese Journal (Society D), Vol.J94-D, No.6, pp.998-10062011~
LectureUnrefereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe2010/12~
Academic presentationIn refereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe;Yuichi Koyama;Shinji Abe;Kenji Mase1B-22010/12~Keio University
International academic conferenceImproving Video Communication for Elderly and Disabled by Coordination of Robot's Active Listening Behaviors and Media ControlsIn refereedOtherCo-authorTomoko Yonezawa;Yuichi Koyama;Hirotake Yamazoe;Shinji Abe;Kenji MaseIEEE IROS 2010pp.1476-14812010/10/18~TaipeiIn this paper, we propose and evaluate a video communication system that compensates for user's uncongenial attitudes by coordinating the robot's behaviors and media control of the video. The system facilitates comfortable video communications between elderly or disabled people by an assistant robot for each user that expresses (a) active listening behaviors to compensate for the listener's attitude when he/she is not really listening to another user's talking and (b) a cover-up behavior (gaze turned to the user) to divert attention from the other user's uncongenial attitude when that person is not looking at the talking user but toward the robot at her/his side; this behavior is performed by coordinating the automatic switching of cameras to give the impression that the congenial person is still looking at the user. The results obtained in the system evaluation show the significant effectiveness of this design approach using the robot's behavior and media control of the video to compensate for the problems in video communication that we aimed to overcome.
International academic conferenceConversational Attitude-aware Behavioral Design for Robot Assistant Combined with Video CommunicationIn refereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe;Yuichi Koyama;Shinji Abe;Kenji MaseThe 5th ACM Workshop of CASEMANSpp. 1-82010/9/26~CopenhagenThis paper proposes a videophone conversation support system by the behaviors of a companion robot and the switching of camera images in coordination with the user's conversational attitude toward the communication. In order to maintain a conversation and to achieve comfortable communication, it is necessary to understand a user's conversational states, which are whether the user is talking (taking the initiative) and whether the user is concentrating on the conversation. First, a) the system estimates the user's conversational state by a machine learning method. Next, b-1) the robot appropriately expresses its active listening behaviors, such as nodding and gaze turns, to compensate for the listener's attitude when she/he is not really listening to another user's speech, b-2) the robot shows communication-evoking behaviors (topic provision) to compensate for the lack of a topic, and b-3) the system switches the camera images to create an illusion of eye-contact corresponding to the current context of the user's attitude. From empirical studies, a detailed experiment, and a demonstration experiment, i) both the robot's active listening behaviors and the switching of the camera image compensate for the other person's attitude, ii) the topic provision function is effective for awkward silences, and iii) elderly people prefer long intervals between the robot's behaviors.
Academic presentationIn refereedOtherCo-authorTomoko Yonezawa;Yuichi Koyama;Hirotake Yamazoe;Akira Utsumi;Shinji Abe;Kenji Mase;Norihiro Hagitapp.39-402010/6~
LectureUnrefereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe2010/3~
Academic presentationUnrefereedOtherCo-authorAiri Tsuji;Tomoko Yonezawa;Hirotake Yamazoe;Shinji Abe;Noriaki Kuwahara;Kazunari Morimoto2P1-32010~
Academic presentationUnrefereedOtherCo-authorHirotake Yamazoe;Yuichi Koyama;Tomoko Yonezawa;Shinji Abe;Kenji Masepp.34--38(IS1-1)2010~
Academic presentationUnrefereedOtherCo-authorHirotake Yamazoe;Yuichi Koyama;Tomoko Yonezawa;Shinji Abe;Kenji MaseSIG-DE-02/SIG-NOI-02, pp.43--462010~
Academic presentationUnrefereedOtherCo-authorTomoko Yonezawa;Yuichi Koyama;Hirotake Yamazoe;Shinji Abe;Kenji MaseSIG-DE-02/SIG-NOI-02, pp.37--422010~
Academic presentationUnrefereedOtherCo-authorNaoki Okamoto;Tomoko Yonezawa;Hirotake Yamazoe;Fumio Hattori;Norihiro HagitaIEICE-MVE2010-2, pp.7--122010~
Academic presentationUnrefereedOtherCo-authorTomoko Yonezawa;Yuichi Koyama;Hirotake Yamazoe;Shinji Abepp.21-262010~
Academic presentationUnrefereedOtherCo-authorYuichi Koyama;Tomoko Yonezawa;Hirotake Yamazoe;Shinji Abe;Kenji Masepp.27-322010~
LectureUnrefereedOtherSingle-AuthorTomoko Yonezawa2009/12~
LectureUnrefereedOtherSingle-AuthorTomoko Yonezawa2009/12~
Academic presentationIn refereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe;Yuichi Koyama;Akira Utsumi;Shinji Abe1T-2, 2C-32009/12~Keio UniversityImpressive Experience Award
LectureUnrefereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe2009/11~
PapersVerification of Behavioral Designs for Gaze-communicative Stuffed-toy RobotIn refereedAcademic JournalCo-authorTomoko Yonezawa;Hirotake Yamazoe;Akira Utsumi;Shinji AbeIEICE Japanese Journal (Society D), Vol.J92-D, No.1, pp.81-922009~
International academic conferencePortable Recording/Browsing System of Voice Memos Allocated to User-relative DirectionsIn refereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe;Hiroko TerasawaPervasive 2009 Adjunct Proceedingspp.241-2442009~Nara, JAPANIn this paper, we propose an intuitive interface to record and browse personal voice memos using head directions in a user-relative auditory space. This system supports the user's thinking process by letting her/him interact with voice memos within the auditory space in mobile/pervasive environments, in a manner associated with the user's mental space. The user's utterances in the auditory memo space are recorded at the relative head direction the user is facing, which is retrieved by a 3D geomagnetic sensor. In order to facilitate the browsing of multiple voice memos without confusion, "auditory icons," which are symbolic sounds, are employed to represent the memos in the auditory memo space.
International academic conferenceEvaluating Crossmodal Awareness of Daily-partner Robot to User's Behaviors with Gaze and Utterance DetectionIn refereedOtherCo-authorTomoko Yonezawa;Hirotake Yamazoe;Akira Utsumi;Shinji AbeCASEMANS2009pp.1-82009~Nara, JAPANThis paper proposes a daily-partner robot that is aware of the user's situation or behavior through gaze and utterance detection. For appropriate and familiar anthropomorphic interaction, the robot should wait for a timing to talk to the user corresponding to her/his situation while she/he is doing a task or thinking. According to the need, our proposed robot i) estimates the user's context, such as the target of the user's speech, by detecting her/his gaze and utterance, ii) tries to notify the user of its need to speak by silent (i.e., without making an utterance) gaze-turns toward the user and joint attention, taking advantage of attentiveness, and iii) tells the message when the user talks to the robot. The results of experiments combining subjects' daily tasks with/without the above steps show that the crossmodal-aware behaviors of the robot are important for respectful communication: the robot's silent behaviors show its intention to speak and draw the user's attention without disturbing the user's ongoing task.Best Paper Award
Lecture (unrefereed, single author): Tomoko Yonezawa, 2009.
Academic presentation (refereed, co-author): Tomoko Yonezawa; Hirotake Yamazoe; Hiroko Terasawa, 2009.
Academic presentation (unrefereed, co-author): Yuichi Koyama; Tomoko Yonezawa; Hirotake Yamazoe; Shinji Abe; Kenji Mase, pp. 103-108, 2009.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Hirotake Yamazoe; Hiroko Terasawa, pp. 75-81, 2009.
Academic presentation (unrefereed, co-author): Hirotake Yamazoe; Akira Utsumi; Tomoko Yonezawa; Shinji Abe, No. 56, pp. 1-6, 2009.
Academic presentation (unrefereed, co-author): Hirotake Yamazoe; Tomoko Yonezawa; Hiroko Terasawa, Vol. 2009-UBI-22, No. 18, pp. 1-8, 2009.
Academic presentation (refereed, co-author): Tomoko Yonezawa; Hirotake Yamazoe; Shinji Abe, 2T-2, 2D-2, Dec. 2008, Keio University. Impressive Experience Award.
Academic presentation (refereed, co-author): Noriaki Mitsunaga; Tomoko Yonezawa; Taichi Tajika, pp. 39-40, Mar. 2008, NII. Interactive Presentation Award.
International academic conference (refereed, co-author): Taichi Tajika; Tomoko Yonezawa; Noriaki Mitsunaga, "Intuitive Page-turning Interface of E-books on Flexible E-paper based on User Studies," ACM Multimedia 2008, pp. 793-796, 2008, Vancouver, Canada.
In this paper, we propose an intuitive page-turning and browsing interface for e-books on flexible e-paper based on user studies. Our user studies showed various types of page-turning actions, such as flipping, grasping, and sliding, depending on the situation or user. We categorized these actions into three categories: turning, flipping through, and leafing through the page(s). Based on this categorized model, we developed a conceptual design and prototype of an e-book reader interface that enables intuitive page-turning interactions using a simple architecture in both hardware and software. The prototype has a flexible plastic sheet with bend sensors, attached to a small LCD monitor, to physically unite the visual display with a tangible control interface based on the natural page-turning actions used in reading a real book. The prototype handles all three page-turning actions observed in the user studies by interpreting the bend degree of the sheet.
International academic conference (refereed, co-author): Tomoko Yonezawa; Hirotake Yamazoe; Akira Utsumi; Shinji Abe, "GazeRoboard: Gaze-communicative Guide System in Daily Life on Stuffed-toy Robot with Interactive Display Board," IEEE IROS 2008, pp. 1204-1209, 2008, Nice, France. Finalist, Best Application Award (5/1200 submissions).
In this paper, we propose a guide system for daily life in semipublic spaces that adopts a gaze-communicative stuffed-toy robot and a gaze-interactive display board. The system provides naturally anthropomorphic guidance through a) gaze-communicative behaviors of the stuffed-toy robot ("joint attention" and "eye-contact reactions") that virtually express its internal mind, b) voice guidance, and c) projection on the board corresponding to the user's gaze orientation. The user's gaze is estimated by our remote gaze-tracking method. The results from both subjective/objective evaluations and demonstration experiments in a semipublic space show i) the holistic operation of the system and ii) the inherent effectiveness of the gaze-communicative guide.
International academic conference (refereed, co-author): Tomoko Yonezawa; Hirotake Yamazoe; Akira Utsumi; Shinji Abe, "Evaluations of Interactive Guideboard with Gaze-communicative Stuffed-toy Robot," COGAIN 2008, pp. 53-58, 2008, Prague, Czech Republic.
International academic conference (refereed, co-author): Tomoko Yonezawa; Noriaki Mitsunaga; Taichi Tajika; Takahiro Miyashita; Shinji Abe, "Sheaf on Sheet: A concept of tangible interface for browsing on a flexible e-paper," SIGGRAPH 2008, Poster B134, 2008, Los Angeles, USA.
We propose a tangible interface that uses flip and flex actions on e-paper by a user of an e-book. The user has a much more natural feeling of reading a "book" on e-paper, compared to current e-books that depend on button interfaces, since he/she uses the same kinds of actions as on a paper book. Our prototype consists of bend sensors attached to a flexible plastic sheet, a small LCD monitor, a speaker, and software on a PC. A user's page-turning motions, such as "curl and flip," "flick through," and "rub and pick up," are converted into the number of pages to flip. That number is sent to e-book software that presents an animation and sound showing the page flips.
International academic conference (refereed, co-author): Hirotake Yamazoe; Akira Utsumi; Tomoko Yonezawa; Shinji Abe, "Remote and Head-Motion-Free Gaze Tracking for Real Environments with Automated Head-Eye Model Calibrations," IEEE CVPR 2008, ID 235, 2008, Anchorage, USA.
We propose a gaze estimation method that substantially relaxes the practical constraints of most conventional methods. Gaze estimation research has a long history, and many systems, including some commercial schemes, have been proposed. However, the application domain of gaze estimation is still limited (e.g., measurement devices for HCI studies, input devices for VDT work) due to the limitations of such systems. First, users must be close to the system (or must wear it), since most systems employ IR illumination and/or stereo cameras. Second, users are required to perform manual calibrations to obtain geometrically meaningful data. These limitations prevent applications that capture and utilize useful human gaze information in daily situations. In our method, inspired by a bundle adjustment framework, the parameters of the 3D head-eye model are robustly estimated by minimizing pixel-wise re-projection errors between single-camera input images and eye-model projections over multiple frames with adjacently estimated head poses. Since this process runs automatically, users do not need to be aware of it. Using the estimated parameters, 3D head poses and gaze directions for newly observed images can be directly determined in the same error-minimization manner. This mechanism enables robust gaze estimation from single-camera, low-resolution images without user-aware preparation tasks (i.e., calibration). Experimental results show the proposed method achieves 6° accuracy with QVGA (320 × 240) images. The proposed algorithm is independent of observation distance; we confirmed that our system works with long-distance observations (10 meters).
International academic conference (refereed, co-author): Hirotake Yamazoe; Akira Utsumi; Tomoko Yonezawa; Shinji Abe, "Remote Gaze Estimation with a Single Camera Based on Facial-Feature Tracking without Special Calibration Actions," ETRA 2008, pp. 245-250, 2008, Savannah, USA.
We propose a real-time gaze estimation method based on facial-feature tracking with a single video camera that does not require any special user action for calibration. Many gaze estimation methods have already been proposed; however, most conventional gaze-tracking algorithms can only be applied in experimental environments due to their complex calibration procedures and lack of usability. In this paper, we propose a gaze estimation method that can be applied to daily-life situations. Gaze directions are determined as 3D vectors connecting the eyeball and iris centers. Since the eyeball center and radius cannot be directly observed from images, the geometrical relationship between the eyeball centers and the facial features, together with the eyeball radius (the face/eye model), is calculated in advance. Then, the 2D positions of the eyeball centers can be determined by tracking the facial features. While conventional methods require instructing users to perform special actions, such as looking at several reference points during calibration, the proposed method requires no such calibration action; it is realized by combining 3D eye-model-based gaze estimation with circle-based algorithms for eye-model calibration. Experimental results show that the gaze estimation accuracy of the proposed method is 5° horizontally and 7° vertically. With our proposed method, various applications that require gaze information in daily-life situations, such as gaze-communication robots and gaze-based interactive signboards, become possible.
Lecture (unrefereed): Tomoko Yonezawa; Hirotake Yamazoe; Akira Utsumi; Shinji Abe, HI-SIG Vol. 10, No. 6 / IEICE-SIG-WIT Vol. 108, No. 332, pp. 21-26, 2008.
Lecture (unrefereed, single author): Tomoko Yonezawa, 2008.
Lecture (unrefereed, co-author): Shinji Abe; Akira Utsumi; Tomoko Yonezawa; Hirotake Yamazoe, 2008.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Hirotake Yamazoe, (2008-CVIM-165), pp. 9-14, 2008.
Academic presentation (unrefereed, co-author): (2008-UBI-19), pp. 87-92, 2008.
Academic presentation (unrefereed, co-author): MIRU2008, pp. 1638-1643, 2008.
Academic presentation (unrefereed, co-author): MIRU2008, pp. 1650-1655, 2008.
Academic presentation (unrefereed, co-author): MIRU2008 demonstration, pp. 1676-1677, 2008.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Noriko Suzuki; Shinji Abe; Kenji Mase; Kiyoshi Kogure, SIG HCI-MUS (2008-05), pp. 25-30, 2008.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Hirotake Yamazoe; Akira Utsumi; Shinji Abe, HCS2007-73 (2008-03), pp. 53-58, 2008.
Academic presentation (refereed, co-author): Tomoko Yonezawa; Hirotake Yamazoe; Akira Utsumi; Shinji Abe, 1B-1, Dec. 2007, Keio University.
Papers (refereed, academic journal, co-author): Tomoko Yonezawa; Noriko Suzuki; Shinji Abe; Kenji Mase; Kiyoshi Kogure, "Perceptual Continuity and Naturalness of Expressive Strength in Singing Voice based on Speech Morphing," EURASIP Journal on Audio, Speech, and Music Processing, Vol. 2007, Article ID 23807 (9 pages), Oct. 1, 2007.
International academic conference (refereed, co-author): Tomoko Yonezawa; Hirotake Yamazoe; Akira Utsumi; Shinji Abe, "Gaze-communicative Behavior of Stuffed-toy Robot with Joint Attention and Eye Contact based on Ambient Gaze-tracking," ACM ICMI 2007, pp. 140-145, 2007, Nagoya, Japan.
This paper proposes a gaze-communicative stuffed-toy robot system with joint attention and eye-contact reactions based on ambient gaze-tracking. For free and natural interaction, we adopted our remote gaze-tracking method. Corresponding to the user's gaze, the gaze-reactive stuffed-toy robot is designed to gradually establish 1) joint attention using the direction of the robot's head and 2) eye-contact reactions from several sets of motion. From both subjective evaluations and observations of the user's gaze in the demonstration experiments, we found that i) joint attention draws the user's interest along with the user-guessed interest of the robot, ii) "eye contact" brings the user a favorable feeling for the robot, and iii) this feeling is enhanced when "eye contact" is used in combination with "joint attention." These results support the approach of our embodied gaze-communication model.
International academic conference (refereed, co-author): Tomoko Yonezawa; Hirotake Yamazoe; Akira Utsumi; Shinji Abe, "Gazecoppet: Hierarchical Gaze-communication in Ambient Space," ACM SIGGRAPH 2007, Poster J06, 2007, San Diego, USA.
This research aims to naturally evoke human-robot communication in ambient space based on a hierarchical model of gaze communication. The interactive ambient space is created with our remote gaze-tracking technology, based on image analyses, and our gaze-reactive robot system. A single remote camera detects the user's gaze in unrestricted situations by using eyeball estimation. The robot's gaze reacts with both 1) "positive evocation" by direct eye contact with multimodal reactions and 2) "passive evocation" by indirect co-gazing (watching a common object or place) according to the user's conscious/unconscious gaze.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Hirotake Yamazoe; Akira Utsumi; Shinji Abe, HCS2007-48, Vol. 107, No. 308, pp. 5-12, 2007.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Hirotake Yamazoe; Akira Utsumi; Shinji Abe, ME2007-83, pp. 1-4, 2007.
Academic presentation (unrefereed, co-author): Hirotake Yamazoe; Akira Utsumi; Tomoko Yonezawa; Shinji Abe, IE2007-11, pp. 1-6, 2007.
Papers (refereed, academic journal, co-author): Tomoko Yonezawa; Noriko Suzuki; Kenji Mase; Kiyoshi Kogure, "Cross-modality of Expressive Strength in Gestural and Vocal Expression with Personification," Human Interface Society Journal, Vol. 8, No. 3, pp. 43-52, Aug. 25, 2006.
This paper aims to articulate the relationship between the expressive strength of gesture and voice for embodied, personified interfaces. We test perception of a puppet interface controlling singing-voice expression to empirically determine the naturalness and strength of various combinations of gesture and voice. The results show: 1) the strength of cross-modal perception is affected by gestural expressions rather than by expressions of the singing voice, and 2) the suitability of cross-modal perception is affected by expressive combinations between singing voice and gestures in personified expressions. Finally, we propose balancing singing voice and gestural expression by expanding and correcting the width of the expressive strength of the singing voice.
Papers (refereed, academic journal, co-author): Tomoko Yonezawa; Noriko Suzuki; Kenji Mase; Kiyoshi Kogure, "Continuous transformation of the singing voice expressions controlled by hand-puppet gesture," Journal of the Acoustical Society of Japan, Vol. 62, No. 3, pp. 233-243, Mar. 1, 2006.
International academic conference (refereed, co-author): Tomoko Yonezawa; Noriko Suzuki; Kenji Mase; Kiyoshi Kogure, "Crossmodal Coordination of Expressive Strength between Voice and Gesture for Personified Media," ACM ICMI 2006, pp. 43-50, 2006, Banff, Canada.
The aim of this paper is to clarify the relationship between the expressive strengths of gestures and voice for embodied and personified interfaces. We conduct perceptual tests using a puppet interface, while controlling singing-voice expressions, to empirically determine the naturalness and strength of various combinations of gesture and voice. The results show that (1) the strength of cross-modal perception is affected more by gestural expression than by the expressions of a singing voice, and (2) the appropriateness of cross-modal perception is affected by expressive combinations between singing voice and gestures in personified expressions. As a promising solution, we propose balancing a singing voice and gestural expressions by expanding and correcting the width and shape of the curve of expressive strength in the singing voice.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Noriko Suzuki; Kenji Mase; Kiyoshi Kogure, SP2005-143, pp. 25-30, 2006.
International academic conference (refereed, co-author): Tomoko Yonezawa; Noriko Suzuki; Kenji Mase; Kiyoshi Kogure, "Gradually Changing Expression of Singing Voice based on Morphing," Interspeech 2005, pp. 541-544, 2005, Lisbon, Portugal.
We have developed a method for synthesizing a singing voice by gradually changing the musical expression based on speech morphing. This paper shows the advantages of this method in comparison with the approach of binary discrete transformation between two expressions, confirmed by statistical analyses of perception tests. In order to synthesize different expressional strengths of a singing voice, a "normal" (without expression) voice of a particular singer is used as the base of morphing, and three different expressions, "dark," "whispery," and "wet," are used as the target. Through our experiments, we confirmed that i) the proposed morphing algorithm effectively interpolates the expressional strength of a singing voice, ii) an approximate equation of the perceptual sense can be used to calculate the morph ratio at a perceptually linear interval, and iii) our gradual transformation method can generate a natural singing voice from the interpolation of two different expressions.
International academic conference (refereed, co-author): Tomoko Yonezawa; Noriko Suzuki; Kenji Mase; Kiyoshi Kogure, "HandySinger: Expressive Singing Voice Morphing using Personified Hand-puppet Interface," New Interfaces for Musical Expression 2005 (NIME 2005), pp. 121-126, 2005, Vancouver, Canada.
The HandySinger system is a personified tool developed to naturally express a singing voice controlled by the gestures of a hand puppet. Assuming that a singing voice is a kind of musical expression, natural expressions of the singing voice are important for personification. We adopt a singing-voice morphing algorithm that effectively smooths out the strength of expressions delivered with a singing voice. The system's hand puppet consists of a glove with seven bend sensors and two pressure sensors. It sensitively captures the user's motion as a personified puppet's gesture. To synthesize the different expressional strengths of a singing voice, the "normal" (without expression) voice of a particular singer is used as the base of morphing, and three different expressions, "dark," "whisper," and "wet," are used as the target. This configuration provides musically expressive controls that are intuitive to users. In the experiment, we evaluate whether 1) the morphing algorithm interpolates expressional strength in a perceptual sense, 2) the hand-puppet interface provides gesture data at sufficient resolution, and 3) the gestural mapping of the current system works as planned.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Noriko Suzuki; Kenji Mase; Kiyoshi Kogure, IPSJ-SIGHI-111, Vol. 2004, No. 115, pp. 13-20, 2004.
Academic presentation (unrefereed, co-author): Noriko Suzuki; Tomoko Yonezawa; Yasuhiro Katagiri, pp. 409-410, 2004.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Noriko Suzuki; Kenji Mase; Kiyoshi Kogure, pp. 809-810, 2004.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Hideyuki Mizuno; Masanobu Abe, pp. 263-264, 2003.
Papers (refereed, academic journal, co-author): Tomoko Yonezawa; Brian Clarkson; Kenji Mase, "Tactile Sensor-doll Interaction with Context-aware Music Expressions," Journal of Information Processing Society of Japan, Vol. 43, No. 8, pp. 2810-2820, Aug. 15, 2002.
We present a sensor-doll capable of musical expression as a sympathetic communication device. The doll has a computer and various sensors to recognize its own situation and the activities of the user. It also has internal "mind" states that reflect different situated contexts. The user's multimodal interaction with the passive doll is translated into musical expressions that depend on the doll's state of mind. Finally, we evaluate the effect of musical expression on human communication using the tactile doll.
International academic conference (refereed, co-author): Tomoko Yonezawa; Kenji Mase, "Musically Expressive Doll in Face-to-face Communication," IEEE ICMI 2002, pp. 417-422, 2002, Pittsburgh, USA.
We propose an application that uses music as a multimodal expression to activate and support communication that runs parallel with traditional conversation. We examine a personified, doll-shaped interface designed for musical expression. To direct such gestures toward communication, we have adopted an augmented stuffed toy with tactile interaction as a musically expressive device. We constructed the doll with various sensors for user-context recognition. This configuration enables translation of the interaction into melodic statements. We demonstrate the effect of the doll on face-to-face conversation by comparing the experimental results of different input interfaces and output sounds. Consequently, we found that conversation with the doll was positively affected by the musical output, the doll interface, and their combination.
International academic conference (refereed, co-author): Kazuyuki Saito; Tomoko Yonezawa; Kenji Mase, "Awareness Communications by Entertaining Toy Doll Agents," International Workshop on Entertainment Computing 2002, pp. 326-333, 2002, Makuhari, Japan.
In this paper, we propose a sensor-doll system that provides multiple users at remote locations with an awareness communication channel. A doll is used as the interface agent of the local user, and this agent is connected to a remote doll by local and/or wide-area networks. The doll sends out information on local ambient activities and the user's intentional interactions to the remote agent and, at the same time, displays the received remote activities by adapting its presentation to the local context. Musical sound expression is used to display the remote awareness, mixing the local response and remote activities. Music also provides an entertaining and sympathetic intimacy with the doll and, eventually, the remote user. The design and implementation of the networked sensor-doll, equipped with various tactile sensors and a PC, are described in detail. We also discuss issues of awareness communication and give preliminary experimental results.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Hideyuki Mizuno; Masanobu Abe, pp. 327-328, 2002.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Hideyuki Mizuno; Masanobu Abe, Vol. 102, No. 292, pp. 17-22, 2002.
Academic presentation (unrefereed, co-author): Hiroshi Saito; Tomoko Yonezawa; Shinmi Hattori; Kenji Mase, 2001-HI-96-3, Vol. 2001, No. 96, pp. 15-22, 2002.
Artistic work (unrefereed, co-author): Kenji Mase; Naomi Takano; Sidney Fels; Tomoko Yonezawa, Oct. 26, 2001.
International academic conference (refereed, co-author): Kenji Mase; Tomoko Yonezawa, "Body, Clothes, Water, and Toys - Media Towards Natural Music Expressions with Digital Sounds -," CHI 2001 Workshop on New Interfaces for Musical Expression, Apr. 2001, Seattle, USA.
In this paper, we introduce our research challenges in creating new musical instruments using everyday media with intimate interfaces, such as the body, clothes, water, and stuffed toys. Various sensor technologies, including image processing and general touch-sensitive devices, are employed to exploit these interaction media. The focus of our effort is to provide user-friendly and enjoyable experiences for new music and sound performances. The multi-modality of musical instruments is explored in each attempt. The degree of controllability in performance and the richness of expression are also discussed for each installation.
International academic conference (refereed, co-author): Tomoko Yonezawa; Brian Clarkson; Michiaki Yasumura; Kenji Mase, "Context-aware Sensor-doll as a Music Expression Device," ACM SIGCHI 2001, pp. 307-308, Apr. 2001, Seattle, USA.
We present a sensor-doll capable of musical expression as a sympathetic communication device. The doll is equipped with a computer and various sensors, such as a camera, microphone, accelerometer, and touch-sensitive sensors, to recognize its own situation and the activities of the user. The doll has its own internal "mind" states reflecting different situated contexts. The user's multi-modal interaction with the passive doll is translated into musical expressions that depend on the doll's state of mind.
Academic presentation (refereed, co-author): Tomoko Yonezawa; Brian Clarkson; Michiaki Yasumura; Kenji Mase, Vol. 2001, No. 5, pp. 19-20, Mar. 2001.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Brian Clarkson; Michiaki Yasumura; Kenji Mase, pp. 11-20, 2001.
Academic presentation (unrefereed, co-author): Kenji Mase; Brian Clarkson; Tomoko Yonezawa, 2001-HI-92-1, Vol. 2001, No. 3, pp. 1-8, 2001.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Brian Clarkson; Michiaki Yasumura; Kenji Mase, 2001-HI-92-1, Vol. 2001, No. 3, pp. 17-24, 2001.
International academic conference (refereed, co-author): Tomoko Yonezawa; Kenji Mase, "Tangible Sound: Musical Instrument Using Tangible Fluid Media," ICMC 2000, pp. 551-554, Aug. 2000, Berlin, Germany.
In this paper, we introduce "Tangible Sound," a musical instrument with a novel user interface that uses water. Like music, fluids cannot be physically grasped because their shape is constantly changing. We thus believe that water is a suitable interface for performing flowing music. We created a live, hands-on installation that uses the flow of water as an input medium to control the intuitively appealing feeling of musical flow. With our instrument, performers interact with water flowing from a faucet into a drain. We have developed a method for measuring the volume of the water flow and for generating music from this measurement. This installation leads to the novel concept of "Source and Drains" for programmable musical instruments. Finally, we consider the potential of this special interface for musical instruments and interactive arts.
Papers (refereed, academic journal, co-author): Tomoko Yonezawa; Kenji Mase, "Interaction of Musical Instruments Using Fluid," Transactions of the Virtual Reality Society of Japan, Vol. 5, No. 1, pp. 755-762, Mar. 2000.
Flowing water is a suitable interface for performing flowing sound and music, since fluid and sound have common characteristics. For example, both media change shape over time and so cannot be grasped. We use flowing water in a musical-instrument installation. As water is frequently used in daily life as an essential resource among various fluid materials, an instrument using water will be friendlier and become an amenity in our lives. Recently there have been many installations and interactive artworks using water, but they do not exploit the ability to use water itself as a medium. This research aims to find practical uses of fluid media for musical instruments. In addition, we propose a method to sense the amount of water flow for enjoying music. To judge the user's actions, we use the changes in the upper flow from the faucet and in the lower flow toward the drain, as well as the difference between these two values. This configuration leads to a novel concept of "Source and Drains," which is also applicable to traditional wind instruments. Based on this concept, we introduce installations named "Tangible Sound" #1 and #2 as novel musical instruments that use water as an input medium.
Academic presentation (refereed, co-author): Tomoko Yonezawa; Kenji Mase, pp. 141-142, Mar. 2000.
Academic presentation (refereed, co-author): Tomoko Yonezawa; Michiaki Yasumura; Kenji Mase, pp. 127-134, 2000.
Academic presentation (unrefereed, co-author): Keiji Hirata; Osamu Ishikawa; Kenji Suzuki; Tomoya Sonoda; Yoichiro Taki; Shu Matsuda; Tomoko Yonezawa, 2000-MUS-38-1, Vol. 2000, No. 118, pp. 1-8, 2000.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Kenji Mase, 2000-HI-89, Vol. 2000, No. 61, pp. 73-80, 2000.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Kenji Mase, Proceedings (2), pp. 61-62, 2000.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Kenji Mase, 99-MUS-33, Vol. 99, No. 106, pp. 1-6, 1999.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Michiaki Yasumura, 1999.
Academic presentation (unrefereed, co-author): Tomoko Yonezawa; Michiaki Yasumura, 1999.
Research Activities Overseas
- Foreign travel: Project on Context-aware and Self-management Systems, Mar. 9, 2012 - Mar. 25, 2012, Germany
Participation in International Conferences
- CASEMANS 2010 Sep. 2010
- CASEMANS 2011 Sep. 2011
- International Conference on Human-Agent Interaction (iHAI 2013) Aug. 2013
- International Computer Music Conference 2000 (ICMC 2000) Aug. 2000 - Sep. 1, 2000
- ACM SIGCHI 2001 Apr. 2001
- IEEE (with ACM) International Conference on Multimodal Interfaces 2002 (ICMI 2002) Oct. 2002
- New Interfaces for Musical Expression 2005 (NIME 2005) May 2005
- Interspeech 2005 Sep. 2005
- ACM International Conference on Multimodal Interfaces 2006 (ICMI 2006) Nov. 2006
- ACM International Conference on Multimodal Interfaces 2007 (ICMI 2007) Oct. 2007
- ACM SIGGRAPH 2007 Aug. 2007
- IEEE IROS 2008 Sep. 2008
- Communication, Environment and Mobility Control by Gaze (COGAIN 2008) Sep. 2008
- ACM SIGGRAPH 2008 Aug. 2008
- Context-Awareness for Self-Managing Systems (CASEMANS 2009) Apr. 2009
- Pervasive 2009 Apr. 2009
- IEEE IROS 2010 Oct. 2010
- IEEE ICSPCC 2011 Sep. 2011
- UbiComp 2011 Sep. 2011
- International Symposium on Wearable Computers (ISWC 2013) Sep. 2013
- UbiComp 2013 Sep. 2013
- IEEE IROS 2013 Nov. 2013
- Human-Computer Interaction International (HCII 2014) Jun. 2014
- Human-Agent Interaction 2014 Oct. 2014
- SCIS-ISIS 2014 Dec. 2014
Courses Taught
- Virtual Communication Media Science
- Phonetics
- Senior Seminars
- First Year Seminar
- Human-Agent Interaction
- Junior Seminars
- Sound Interaction (Lab.)
- Computer Science (Lab.)
- Human-Computer Interaction: Cognition, Media, and Culture
- Thesis Guidance
- Human-robot Interaction and Communication Design