Research Papers on Tobii Eye Tracking Technology

Papers in the Field of Software Development

2015-07-19   Total views: 5457

A reference library of research papers in which Tobii eye trackers have been used or mentioned within the field of software development.

Agrafiotis, D., Davies, S. J. C., Canagarajah, N., & Bull, D. R. (2007). Towards efficient context-specific video coding based on gaze-tracking analysis. ACM Trans. Multimedia Comput. Commun. Appl., 3(4), 4:1–4:15. doi:10.1145/1314303.1314307

Albanesi, M. G., Gatti, R., Porta, M., & Ravarelli, A. (2011). Towards semi-automatic usability analysis through eye tracking. In Proceedings of the 12th International Conference on Computer Systems and Technologies (pp. 135–141). New York, NY, USA: ACM. doi:10.1145/2023607.2023631

Aldana Pulido, R. (2012). Ophthalmic Diagnostics Using Eye Tracking Technology (M.Sc). Royal Institute of Technology, Stockholm. Retrieved from http://kth.diva-portal.org/smash/record.jsf?pid=diva2:506609

Aranyanak, I., & Reilly, R. G. (2013). A system for tracking braille readers using a Wii Remote and a refreshable braille display. Behavior Research Methods, 45(1), 216–228. doi:10.3758/s13428-012-0235-8

Bailly, G., Raidt, S., & Elisei, F. (2010). Gaze, conversational agents and face-to-face communication. Speech Communication, 52(6), 598–612. doi:10.1016/j.specom.2010.02.015

Bailly, G., Elisei, F., & Raidt, S. (2007). Virtual talking heads and ambiant face-to-face communication. NATO Security through Science Series E: Human and Societal Dynamics, 18, 302.

Bednarik, R., Kinnunen, T., Mihaila, A., & Fränti, P. (2005). Eye-Movements as a Biometric. In H. Kalviainen, J. Parkkinen, & A. Kaarna (Eds.), Image Analysis (pp. 780–789). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/11499145_79

Bekele, E., Young, M., Zheng, Z., Zhang, L., Swanson, A., Johnston, R., … Sarkar, N. (2013). A Step towards Adaptive Multimodal Virtual Social Interaction Platform for Children with Autism. In C. Stephanidis & M. Antona (Eds.), Universal Access in Human-Computer Interaction. User and Context Diversity (pp. 464–473). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/978-3-642-39191-0_51

Berman, J. M. J., Khu, M., Graham, I., & Graham, S. A. (2013). ELIA: A software application for integrating spoken language and eye movements. Behavior Research Methods, 1–10. doi:10.3758/s13428-012-0302-1

Beymer, D., & Russell, D. M. (2005). WebGazeAnalyzer: a system for capturing and analyzing web reading behavior using eye gaze. In CHI ’05 Extended Abstracts on Human Factors in Computing Systems (pp. 1913–1916). New York, NY, USA: ACM. doi:10.1145/1056808.1057055

Biedert, R., Buscher, G., & Dengel, A. (2010). The eyeBook – Using Eye Tracking to Enhance the Reading Experience. Informatik-Spektrum, 33(3), 272–281. doi:10.1007/s00287-009-0381-2

Blignaut, P., & Beelders, T. (2012). TrackStick: a data quality measuring tool for Tobii eye trackers. In Proceedings of the Symposium on Eye Tracking Research and Applications (pp. 293–296). New York, NY, USA: ACM. doi:10.1145/2168556.2168619

Brown, A., Jay, C., & Harper, S. (2009). Audio presentation of auto-suggest lists. In Proceedings of the 2009 International Cross-Disciplinary Conference on Web Accessibility (W4A) (pp. 58–61). New York, NY, USA: ACM. doi:10.1145/1535654.1535667

Bulbul, A., Koca, C., Capin, T., & Güdükbay, U. (2010). Saliency for animated meshes with material properties (pp. 81–88). Presented at the Proceedings of the 7th Symposium on Applied Perception in Graphics and Visualization, ACM.

Gustavsson, C. J. (2010). Real Time Classification of Reading in Gaze Data (M.Sc). KTH Royal Institute of Technology.

Camilli, M., Nacchia, R., Terenzi, M., & Nocera, F. D. (2008). ASTEF: A simple tool for examining fixations. Behavior Research Methods, 40(2), 373–382. doi:10.3758/BRM.40.2.373

Chen, M., Yamada, S., & Takama, Y. (2010). Eye-tracking Analysis of User Behaviors in Document Similarity Judgment. Presented at the 24th Annual Conference of the Japanese Society for Artificial Intelligence (JSAI2010), 2G2-OS9-3.

Chen, M., Yamada, S., & Takama, Y. (2011). Investigating user behavior in document similarity judgment for interactive clustering-based search engines. Journal of Emerging Technologies in Web Intelligence, 3(1), 3–10.

Cowell, A., Hale, K., Berka, C., Fuchs, S., Baskin, A., Jones, D., … Fatch, R. (2007). Construction and Validation of a Neurophysio-technological Framework for Imagery Analysis. In J. A. Jacko (Ed.), Human-Computer Interaction. Interaction Platforms and Techniques (pp. 1096–1105). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/978-3-540-73107-8_120

Dalmaijer, E. S., Mathôt, S., & Van der Stigchel, S. (2013). PyGaze: An open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments. Behavior Research Methods. doi:10.3758/s13428-013-0422-2

Davies, S. J. C., Agrafiotis, D., Canagarajah, C. N., & Bull, D. R. (2008). A gaze prediction technique for open signed video content using a track before detect algorithm. In 15th IEEE International Conference on Image Processing, 2008. ICIP 2008 (pp. 705–708). doi:10.1109/ICIP.2008.4711852

Duchowski, A. T. (2004). Hardware-accelerated real-time simulation of arbitrary visual fields. In Proceedings of the 2004 symposium on Eye tracking research & applications (pp. 59–59). New York, NY, USA: ACM. doi:10.1145/968363.968376

Elhelw, M., Nicolaou, M., Chung, A., Yang, G.-Z., & Atkins, M. S. (2008). A gaze-based study for investigating the perception of visual realism in simulated scenes. ACM Trans. Appl. Percept., 5(1), 3:1–3:20. doi:10.1145/1279640.1279643

Faro, A., Giordano, D., Spampinato, C., De Tommaso, D., & Ullo, S. (2010). An interactive interface for remote administration of clinical tests based on eye tracking. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications (pp. 69–72). New York, NY, USA: ACM. doi:10.1145/1743666.1743683

Faro, A., Giordano, D., Pino, C., & Spampinato, C. (2010). Visual attention for implicit relevance feedback in a content based image retrieval. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications (pp. 73–76). New York, NY, USA: ACM. doi:10.1145/1743666.1743684

Galdi, C., Nappi, M., Riccio, D., Cantoni, V., & Porta, M. (2013). A New Gaze Analysis Based Soft-Biometric. In J. A. Carrasco-Ochoa, J. F. Martínez-Trinidad, J. S. Rodríguez, & G. S. di Baja (Eds.), Pattern Recognition (pp. 136–144). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/978-3-642-38989-4_14

Giordano, D., Kavasidis, I., Pino, C., & Spampinato, C. (2012). Content based recommender system by using eye gaze data. In Proceedings of the Symposium on Eye Tracking Research and Applications (pp. 369–372). New York, NY, USA: ACM. doi:10.1145/2168556.2168639

González, G., López, B., Angulo, C., & de la Rosa, J. L. (2005). Acquiring Unobtrusive Relevance Feedback through Eye-Tracking in Ambient Recommender Systems (pp. 181–188). Presented at the Proceedings of the 2005 conference on Artificial Intelligence Research and Development, IOS Press.

Groen, W. B., Rommelse, N., de Wit, T., Zwiers, M. P., van Meerendonck, D., van der Gaag, R. J., & Buitelaar, J. K. (2012). Visual Scanning in Very Young Children with Autism and Their Unaffected Parents. Autism Research and Treatment, 2012. doi:10.1155/2012/748467

Hussain, Z., Pasupa, K., & Shawe-Taylor, J. (2010). Learning relevant eye movement feature spaces across users. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications (pp. 181–185). New York, NY, USA: ACM. doi:10.1145/1743666.1743711

Ishii, R., & Nakano, Y. I. (2010). An empirical study of eye-gaze behaviors: towards the estimation of conversational engagement in human-agent communication. In Proceedings of the 2010 workshop on Eye gaze in intelligent human machine interaction (pp. 33–40). New York, NY, USA: ACM. doi:10.1145/2002333.2002339

Ishii, R., & Nakano, Y. I. (2008). Estimating User’s Conversational Engagement Based on Gaze Behaviors. In H. Prendinger, J. Lester, & M. Ishizuka (Eds.), Intelligent Virtual Agents (pp. 200–207). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/978-3-540-85483-8_20

de Lemos, J., Sadeghnia, G. R., Ólafsdóttir, Í., Jensen, O., & Nielsen, M. D. (2008). Emotional Response Evaluation in Emotion Tool™ 2.0. iMotions Emotion Technology.

Kaatiala, J., Yrttiaho, S., Forssman, L., Perdue, K., & Leppänen, J. (2013). A graphical user interface for infant ERP analysis. Behavior Research Methods. doi:10.3758/s13428-013-0404-4

Kang, J. M., Ahmad, M. A., Teredesai, A., & Gaborski, R. (2007). Cognitively Motivated Novelty Detection in Video Data Streams. In V. A. Petrushin & L. Khan (Eds.), Multimedia Data Mining and Knowledge Discovery (pp. 209–233). Springer London. Retrieved from http://link.springer.com/chapter/10.1007/978-1-84628-799-2_11

Kardan, S., & Conati, C. (2012). Exploring Gaze Data for Determining User Learning with an Interactive Simulation. In J. Masthoff, B. Mobasher, M. C. Desmarais, & R. Nkambou (Eds.), User Modeling, Adaptation, and Personalization (pp. 126–138). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/978-3-642-31454-4_11

Kinnunen, T., Sedlak, F., & Bednarik, R. (2010). Towards task-independent person authentication using eye movement signals. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications (pp. 187–190). New York, NY, USA: ACM. doi:10.1145/1743666.1743712

Komogortsev, O. V., & Khan, J. I. (2009). Eye movement prediction by oculomotor plant Kalman filter with brainstem control. Journal of Control Theory and Applications, 7(1), 14–22. doi:10.1007/s11768-009-7218-z

Kumar, M., Garfinkel, T., Boneh, D., & Winograd, T. (2007). Reducing shoulder-surfing by using gaze-based password entry. In Proceedings of the 3rd symposium on Usable privacy and security (pp. 13–19). New York, NY, USA: ACM. doi:10.1145/1280680.1280683

Leal Bando, L., Scholer, F., & Turpin, A. (2010). Constructing query-biased summaries: A comparison of human and system generated snippets (pp. 195–204). Presented at the Information Interaction in Context, ACM. Retrieved from http://researchbank.rmit.edu.au/view/rmit:13244

Li, H., Men, L., & Chen, J. (2008). A Method of the Extraction of Texture Feature. In L. Kang, Z. Cai, X. Yan, & Y. Liu (Eds.), Advances in Computation and Intelligence (pp. 368–377). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/978-3-540-92137-0_41

Liang, Z., Fu, H., Zhang, Y., Chi, Z., & Feng, D. (2010). Content-based image retrieval using a combination of visual features and eye tracking data. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications (pp. 41–44). New York, NY, USA: ACM. doi:10.1145/1743666.1743675

Liang, Z., Fu, H., Chi, Z., & Feng, D. (2010). Refining a region based attention model using eye tracking data. In 2010 17th IEEE International Conference on Image Processing (ICIP) (pp. 1105–1108). doi:10.1109/ICIP.2010.5651804

Loboda, T. D., & Brusilovsky, P. (2008). Adaptation in the Context of Explanatory Visualization. In P. Dillenbourg & M. Specht (Eds.), Times of Convergence. Technologies Across Learning Contexts (pp. 250–261). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/978-3-540-87605-2_28

Mödritscher, F. (2009). Semantic lifecycles: modelling, application, authoring, mining, and evaluation of meaningful data. International Journal of Knowledge and Web Intelligence, 1(1), 110–124. doi:10.1504/IJKWI.2009.027928

Nataraju, S., Balasubramanian, V., & Panchanathan, S. (2011). An Integrated Approach to Visual Attention Modeling for Saliency Detection in Videos. In L. Wang, G. Zhao, L. Cheng, & M. Pietikäinen (Eds.), Machine Learning for Vision-Based Motion Analysis (pp. 181–214). Springer London. Retrieved from http://link.springer.com/chapter/10.1007/978-0-85729-057-1_8

Nüssli, M.-A., Jermann, P., Sangin, M., & Dillenbourg, P. (2009). Collaboration and abstract representations: towards predictive models based on raw speech and eye-tracking data. In Proceedings of the 9th international conference on Computer supported collaborative learning - Volume 1 (pp. 78–82). Rhodes, Greece: International Society of the Learning Sciences. Retrieved from http://dl.acm.org/citation.cfm?id=1600053.1600065

Ohmoto, Y., Ueda, K., & Ohno, T. (2006). Discrimination of Lies in Communication by using Automatic Measuring System of Nonverbal Information. In 9th International Conference on Control, Automation, Robotics and Vision, 2006. ICARCV ’06 (pp. 1–6). doi:10.1109/ICARCV.2006.345400

Pallez, D., Brisson, L., & Baccino, T. (2008). Towards a human eye behavior model by applying Data Mining Techniques on Gaze Information from IEC (arXiv e-print No. 0803.3186). Retrieved from http://arxiv.org/abs/0803.3186

Paniagua, B., Green, P., Chantler, M., Vega-Rodríguez, M. A., Gómez-Pulido, J. A., & Sánchez-Pérez, J. M. (2009). Perceptually Relevant Pattern Recognition Applied to Cork Quality Detection. In M. Kamel & A. Campilho (Eds.), Image Analysis and Recognition (pp. 927–936). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/978-3-642-02611-9_91

Pichiliani, M. C., Hirata, C. M., Soares, F. S., & Forster, C. H. Q. (2009). TeleEye: An Awareness Widget for Providing the Focus of Attention in Collaborative Editing Systems. In E. Bertino & J. B. D. Joshi (Eds.), Collaborative Computing: Networking, Applications and Worksharing (pp. 258–270). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/978-3-642-03354-4_20

Pivec, M., Pripfl, J., & Trummer, C. (2005). Adaptable E-learning by means of Real-Time Eye-Tracking. World Conference on Educational Multimedia, Hypermedia and Telecommunications 2005, 2005(1), 4037–4041. Retrieved from http://editlib.org/p/20711

Piyasirivej, P. (2005). Using a contingent heuristic approach and eye gaze tracking for the usability evaluation of web sites (Ph.D). Murdoch University. Retrieved from http://researchrepository.murdoch.edu.au/261/

Porta, M. (2008). Implementing eye-based user-aware e-learning. In CHI ’08 Extended Abstracts on Human Factors in Computing Systems (pp. 3087–3092). New York, NY, USA: ACM. doi:10.1145/1358628.1358812

Raidt, S. (2008). Gaze and face-to-face communication between a human speaker and an animated conversational agent—mutual attention and multimodal deixis. Grenoble Institute of Technology, Grenoble.

Raidt, S., Bailly, G., & Elisei, F. (2006). Does a Virtual Talking Face Generate Proper Multimodal Cues to Draw User’s Attention to Points of Interest? In International conference on Language Resources and Evaluation (LREC) (pp. 2544–2549). Genoa, Italy. Retrieved from http://hal.archives-ouvertes.fr/hal-00366537

Ramamurthy, B., Lewis, B., & Duchowski, A. T. (2012). Eye Tracking to Enhance Facial Recognition Algorithms. Presented at the 30th ACM Conference on Human Factors in Computing Systems.

Shimotomai, T., Takahashi, H., & Omori, T. (2012). Model for viewing art. In 2012 Joint 6th International Conference on Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS) (pp. 117–120). doi:10.1109/SCIS-ISIS.2012.6505274

Simola, J., Salojärvi, J., & Kojo, I. (2008). Using hidden Markov model to uncover processing states from eye movements in information search tasks. Cognitive Systems Research, 9(4), 237–251. doi:10.1016/j.cogsys.2008.01.002

Špakov, O. (2008). iComponent – Device-Independent Platform for Analyzing Eye Movement Data and Developing Eye-Based Applications (Ph.D). University of Tampere.

Toker, D., Steichen, B., Gingerich, M., Conati, C., & Carenini, G. (2014). Towards facilitating user skill acquisition: identifying untrained visualization users through eye tracking (pp. 105–114). ACM Press. doi:10.1145/2557500.2557524

Waller, A., Menzies, R., Herron, D., Prior, S., Black, R., & Kroll, T. (2013). Chronicles: Supporting Conversational Narrative in Alternative and Augmentative Communication. In P. Kotzé, G. Marsden, G. Lindgaard, J. Wesson, & M. Winckler (Eds.), Human-Computer Interaction – INTERACT 2013 (pp. 364–371). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/978-3-642-40480-1_23

Weibel, N., Fouse, A., Emmenegger, C., Kimmich, S., & Hutchins, E. (2012). Let’s look at the cockpit: exploring mobile eye-tracking for observational research on the flight deck. In Proceedings of the Symposium on Eye Tracking Research and Applications (pp. 107–114). New York, NY, USA: ACM. doi:10.1145/2168556.2168573

Yarrington, D., & McCoy, K. (2008). Creating an automatic question answering text skimming system for non-visual readers. In Proceedings of the 10th international ACM SIGACCESS conference on Computers and accessibility (pp. 279–280). New York, NY, USA: ACM. doi:10.1145/1414471.1414537

Yoshida, K., Takahashi, S., Ono, H., Fujishiro, I., & Okada, M. (2010). Perceptually-Guided Design of Nonperspectives Through Pictorial Depth Cues (pp. 173–178). Presented at the Computer Graphics, Imaging and Visualization (CGIV), 2010 Seventh International Conference on, IEEE.

Zeng, X., & Pei, H. (2012). Human-Computer Interaction in Ubiquitous Computing Environments. In C. Liu, L. Wang, & A. Yang (Eds.), Information Computing and Applications (pp. 628–634). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/978-3-642-34041-3_87

Zhang, Y., & Hornof, A. J. (n.d.). Improving eye tracking accuracy with probable fixation locations. Retrieved from http://130.203.133.150/viewdoc/summary?doi=10.1.1.174.3537

Zhang, H., Fricker, D., & Yu, C. (2010). A Multimodal Real-Time Platform for Studying Human-Avatar Interactions. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, & A. Safonova (Eds.), Intelligent Virtual Agents (pp. 49–56). Springer Berlin Heidelberg. Retrieved from http://link.springer.com/chapter/10.1007/978-3-642-15892-6_6

Zhang, Y., Fu, H., Liang, Z., Chi, Z., & Feng, D. (2010). Eye movement as an interaction mechanism for relevance feedback in a content-based image retrieval system. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications (pp. 37–40). New York, NY, USA: ACM. doi:10.1145/1743666.1743674

Zhang, Y., Zhao, X., Fu, H., Liang, Z., Chi, Z., Zhao, X., & Feng, D. (2011). A Time Delay Neural Network model for simulating eye gaze data. Journal of Experimental & Theoretical Artificial Intelligence, 23(1), 111–126. doi:10.1080/0952813X.2010.506298
