BIBLIOGRAPHY

Abul, A. L., R. Alhajj, F. Polat & K. Barker. 2003. Cluster Validity Analysis Using Subsampling. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, IEEE Press, Vol. 2, pp. 1435-1440.

Adam, B. & C. Richard. 2004. "Application of Self-Organizing Maps to Clustering of High-Frequency Financial Data". The Australasian Workshop on Data Mining and Web Intelligence (AWDM&WI 2004).

Adela, B., L. Ion & O. V. Simona. 2009. "Public Institutions' Investments with Data Mining Techniques". WSEAS Transactions on Computers, Vol. 8(4), pp. 589-598.

Adelson-Vel'skii, G. & E. M. Landis. 1962. An algorithm for the organization of information. In Proceedings of the USSR Academy of Sciences, Vol. 145, pp. 263-266. In Russian; English translation by Myron J. Ricci in Soviet Mathematics Doklady, 3, pp. 1259-1263, 1962.

Alistair, M., R. Greg & C. Matthew. 2002. Combining and comparing clustering and layout algorithms. Submitted to the Joint EUROGRAPHICS-IEEE TCVG Symposium on Visualization. University of Glasgow.

Alexander, F. & G. V. Arjan. 2006. A Two-Step Hierarchical Algorithm for Model-Based Diagnosis. Proceedings of the 21st AAAI National Conference on Artificial Intelligence, Vol. 1, pp. 827-833.

Allan, B., O. Rafail & R. Yuval. 2004. Subquadratic approximation algorithms for clustering problems in high dimensional spaces. Machine Learning, 56, pp. 153-167.

Alsayed, A., S. Eike & S. Gunter. 2008. A Schema Matching-based Approach to XML Schema Clustering. Proceedings of iiWAS2008, November 24-26, 2008, Linz, Austria.

Anil, J. K. 2009. Data clustering: 50 years beyond K-means. International Conference on Pattern Recognition (ICPR), Tampa, FL, December 8, 2008, pp. 651-666.

Anitha, A. S., J. Akilandeswari & B. Sathiyabhama. 2011. "A survey on partition clustering algorithms". International Journal of Enterprise Computing and Business Systems, Vol. 1(1), January 2011. Online: http://www.ijecbs.com.

Ashok, S., Z. Jieping, P. Robert & M. A. Richard. 2009. "A modified hyperplane clustering algorithm allows for efficient and accurate clustering of extremely large datasets". Bioinformatics (Oxford), Vol. 25(9), pp. 1152-1157.

Bariani, M., R. Cucchiara, P. Mello & M. Piccardi. 1997. Data Mining for Automated Visual Inspection.

Babar, M. A., Winkler, D. & Biffl, S. 2007. "Evaluating the Usefulness and Ease of Use of a Groupware Tool for the Software Architecture Evaluation Process". Paper presented at the First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007), Madrid, Spain.

Behrouz, M-B., K. A. Deborah, K. Gerd & P. F. William. 2003. Predicting Student Performance: An Application of Data Mining Methods with the Educational Web-Based System LON-CAPA. Proceedings of the 33rd ASEE/IEEE Frontiers in Education Conference, pp. 5-8.

Ben, S. 1992. Tree Visualization with Tree-Maps: A 2-D Space-Filling Approach. ACM Transactions on Graphics, Vol. 11, pp. 92-99.

Ben, S. 2001. Inventing Discovery Tools: Combining Information Visualization with Data Mining. Proceedings of Discovery Science 2001, Lecture Notes in Computer Science, pp. 17-28.

Ben, S. & P. Catherine. 2005. Designing the User Interface: Strategies for Effective Human-Computer Interaction, Fourth Edition. Addison-Wesley.

Bhadran, V., R. C. Roy & R. Gopikakumari. 2008. Visual representation of 2-D DFT in terms of 2x2 data: A pattern analysis. In Proceedings of the International Conference on Computing, Communication and Networking (ICCCN '08).

Bo, W. K., B. A. Thomas & S. Yakup. 1997. Neural Network Applications in Business: A Review and Analysis of the Literature (1988-1995). Decision Support Systems, 19, pp. 301-320.

Boriana, M. L. & C. M. Marcos. 2002. O-Cluster: Scalable Clustering of Large High Dimensional Data Sets.

Brian, J. & S. Ben. 1991. Treemaps: A Space-filling Approach to the Visualization of Hierarchical Information. IEEE Visualization '91, San Diego, CA, pp. 284-291.

Charu, A. C., H. Jiawei, W. Jianyong & Y. S. Philip. 2004. A framework for projected clustering of high dimensional data streams. In Proceedings of the Very Large Databases Conference (VLDB).

Chris, J. N. & D. B. Roger. 2001. Tracking multiple sports players through occlusion, congestion and scale. In 12th British Machine Vision Conference (BMVC01), pp. 93-102, Manchester, UK, September.

Claudio, C., O. Stanislaw, R. Giovanni & W. Dawid. 2009. A Survey of Web Clustering Engines. ACM Computing Surveys, Vol. 41(3).

Couturier, O., V. Dubois, T. Hsu & E. M. Nguifo. 2008. Optimizing Occlusion Appearances in 3D Association Rules Visualization. International IEEE Conference "Intelligent Systems".

Christopher, W. D. & Justin, H. G. 2000. Engineering Psychology and Human Performance. Prentice-Hall.

Dan, J., M. K. Philip & J. K. Anil. 1998. Large-Scale Parallel Data Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20(8), pp. 871-876.

D'Andrade, R. 1978. U-Statistic Hierarchical Clustering. Psychometrika, 43, pp. 59-67.

Daniel, K. A. 2002. Information visualization and visual data mining. IEEE Transactions on Visualization and Computer Graphics, Vol. 8(1), pp. 1-8.

Daniel, M. R. & C. Dianne. 2000. Visualization of data. Current Opinion in Biotechnology, 11(1) (February), pp. 89-96.

Daniel, K., A. Gennady, F. Jean-Daniel, G. Carsten & M. Guy. 2008. Visual analytics: Definition, process, and challenges. In Information Visualization, pp. 154-175.

David, H. 1998. Data Mining: Statistics and More? The American Statistician, 52(2), pp. 112-118.

David, E. 2000. Fast hierarchical clustering and other applications of dynamic closest pairs. Journal of Experimental Algorithmics, 5(1), pp. 1-23, June 2000. http://www.jea.acm.org/2000/EppsteinDynamic/, arXiv:cs.DS/9912014.

Davis, F. D. 1989.
Perceived Usefulness, Perceived Ease of Use and User Acceptance of Information Technology. MIS Quarterly, 13(3), pp. 319-340.

Defu, Z., J. Qingshan & L. Xin. 2005. Application of Neural Networks in Financial Data Mining. World Academy of Science, Engineering and Technology, pp. 136-139.

Dharmesh, M. M. & N. T. Lan. 2006 (August 20-23). Visual Data Mining using Principled Projection Algorithms and Information Visualization Techniques. ACM Research Track Poster, pp. 643-648.

Dick, N. 2002. Pre-empting user questions through anticipation: data mining FAQ lists. In Proceedings of the 2002 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on Enablement Through Technology, pp. 101-109. New York: ACM Press.

Doddi, S., A. Marathe, S. S. Ravi & D. C. Toney. 2002. Discovery of Association Rules in Medical Data. Online: http://www.c3.lanl.gov.

Du, Z. & F. Lin. 2005. A novel parallelization approach for hierarchical clustering. Parallel Computing, 31, pp. 523-527.

Edelstein, H. 1996 (January 8). Mining Data Warehouses. Information Week.

Elias, P., G. Werner & W. Gerhard. 2003. Visualizing Changes in the Structure of Data for Exploratory Feature Selection. Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD '03), August 24-27, 2003, Washington, DC, USA, pp. 157-166.

Ernst, K., W. Huub & W. Jarke. 2001. Botanical Visualization of Huge Hierarchies. InfoVis 2001: IEEE Symposium on Information Visualization, San Diego, CA, pp. 87-94.

FatCat Programmer. 2010. Balanced Binary Search Tree (BST) (Search, Delete, PrintInOrder, PrintPreOrder, PrintPostOrder, DepthFirst, BreadthFirst, BalanceTree). http://www.codeproject.com/KB/collections/BinarySearchTree.aspx.

Frank, H. & W. J. Jarke. 2002. Beamtrees: Compact Visualization of Large Hierarchies. InfoVis 2002, Boston, MA, pp. 93-100.

Frank, T. M., M. Ulrich & V. Oliver. 1995. Short Term Prediction of Sales in Supermarkets. Proceedings of ICNN '95, IEEE, Vol. 2, pp. 1028-1031.

Fangfang, F. 2005. The Application of Visualization Technology on E-commerce Data Mining. Second International Symposium on Intelligent Information Technology Application, pp. 563-566. IEEE Computer Society.

Johnson, S. C. 1967. Hierarchical Clustering Schemes. Psychometrika, 32, pp. 241-254.

Jowl, J. & P. Slavik. 2004. XML Visualization Using Tree Rewriting. Proceedings of the 20th Spring Conference on Computer Graphics, ACM, New York.

Jon, P. A. & S. Russell. 1995. A system for improving distance and large-scale classes. In Proceedings of the 6th Annual Conference on the Teaching of Computing and the 3rd Annual Conference on Integrating Technology into Computer Science Education. New York: ACM Press.

Jure, Z. 1982. Clustering of Large Data Sets. Chemometrics Research Studies Series. Research Studies Press, Chichester.

Jose, P., C. Etienne, B. Francois & T. Monique. 2004. Data mining for activity estimation in video data.

Kate, S. A. & G. N. D. Jatinder. 2000. Neural Networks in Business: Techniques and Applications for the Operations Researcher. Computers & Operations Research, 27, pp. 1023-1044.

Khaleel, A., R. Sanjay & S. Vineet. 1997. An Efficient K-Means Clustering Algorithm. http://ufl.edu/~ranka/.

Ke-Bing, Z. 2007. Visual Cluster Analysis in Data Mining. PhD Thesis, Macquarie University.

Keke, C. & L. Ling. 2009. HE-Tree: a framework for detecting changes in clustering structure for categorical data streams. The VLDB Journal, pp. 1241-1260.

Keri, P. E. 2001. Managing and Using Information Systems: A Strategic Approach. New York: John Wiley & Sons.

Laskari, L. C., G. C. Meletiou, D. K. Tasoulis & M. N. Vrahatis. 2003. Data Mining and Cryptology.

Linda, T. & C. John. 2001. Enhancing learning environments through solution-based knowledge discovery tools: Forecasting for self-perpetuating systemic reform. JSET E-Journal, 16(4). Retrieved October 17, 2003, from http://jset.unlv.edu/16.4T/tsantis/first.html.

Legendre, P. 1998. Numerical Ecology. Elsevier Science.

Leonard, K. & R. J. Peter. 1987. "Clustering by means of Medoids". In Statistical Data Analysis Based on the L1-Norm and Related Methods, edited by Y. Dodge. North-Holland, pp. 405-416.

Leonard, K. & R. J. Peter. 1990. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons.

Leung, Y., J. Zhang & Z. Xu. 2000. Clustering by scale-space filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12), pp. 1396-1410.

Liu, L. & L. Zhi Qing. 2004. A method of choosing the initial cluster centers. Computer Engineering and Applications, pp. 179-180.

MacQueen, J. B. 1967. Some Methods for Classification and Analysis of Multivariate Observations. Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability. Berkeley: University of California Press, pp. 281-297.

Maiko, Y., I. Takayuki & Y. Fumiyoshi. 2008. Visualization and Level-of-Detail Control for Multi-Dimensional Bioactive Chemical Data. 12th International Conference on Information Visualisation, pp. 11-16.

Mamta, J. & P. Nitika. 2007. Data Mining and its Scope. Proceedings of COIT, National Conference on Challenges & Opportunities in Information Technology.

Manasi, J. N. 2003. Parallel K-Means Algorithm on Distributed Memory Multiprocessors.

Maria, A. L. Z., R. Osmar & C. Alexandru. 2001. Application of Data Mining Technologies for Medical Image Classification. Proceedings of the Second International Workshop on Multimedia Data Mining (MDM/KDD'2001).

Maria, C. F. de Oliveira & L. Haim. 2003. From visual data exploration to visual data mining: a survey. IEEE Transactions on Visualization and Computer Graphics, Vol. 9(3), pp. 378-394.

Margaret, D. H.
2003. Data Mining: Introductory and Advanced Topics. Upper Saddle River, NJ: Pearson Education, Inc.

Matej, F., N. Mateja & N. Bojan. 2005. Hierarchical Clustering with Concave Data Sets. Vol. 2, No. 2, pp. 173-193.

Merriam-Webster Online Dictionary. 2008. Cluster analysis. http://www.merriam-webster.com.

Michael, S. Gordon. 2000. Mastering Data Mining. New York, NY: Wiley.

Mihael, A. 2001. Visual data mining with pixel-oriented visualization techniques. Proceedings of the Workshop on Visual Data Mining.

Mei, L., L. Guanling, L. C. Wang & S. Anand. 2006. PENS: An Algorithm for Density-Based Clustering in Peer-to-Peer Systems. INFOSCALE '06: Proceedings of the First International Conference on Scalable Information Systems, May 29-June 1, 2006, Hong Kong.

Mihael, A., E. Martin & K. P. Hans. 2000. Towards an Effective Cooperation of the Computer and the User for Classification. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2000), Boston, MA, pp. 179-188.

Mohammad, H., N. Robert & M. Sarah. 2009. An Approach to Data Extraction and Visualization for Wireless Sensor Networks. Eighth International Conference on Networks.

Moreno, J. E., O. Castillo, J. R. Castro, L. G. Martinez & P. Melin. 2007. Data Mining for extraction of fuzzy IF-THEN rules using Mamdani and Takagi-Sugeno-Kang FIS. Engineering Letters, Vol. 15(1), pp. 82-88.

Mucha, H-J. & H. Sofyan. 2003. Cluster Analysis. http://www.xplore-stat.de/tutorials/clustnode3.html, 15.08.2004.

Murray, W., M. Z. Zagros & J. Patrick. 2004. Identifying national types: A cluster analysis of politics, economics and conflict. Journal of Peace Research, 5, pp. 607-623.

Naeimeh, D. & B. R. Mohammad. 2005. A New Analysis Model for Data Mining Processes in Higher Educational Systems. MMU International Symposium on Information and Communications Technologies, July 7-9, 2005.

Raza, A. S. S. 2001. Knowledge Management in Healthcare: Towards Knowledge-Driven Decision-Support Services. International Journal of Medical Informatics, 63, pp. 5-18.

Robert, S. 2001. Information Visualization. Addison-Wesley.

Roy, W. P., S. Brandon, L. Y. Qing, A. F. Fazel, V. J. Julio, L. Lying & L. Boleshm. 2004. Data Mining of Gene Expression Changes in Alzheimer Brain. Artificial Intelligence in Medicine, 31, pp. 137-154.

Root, R. W. & S. Draper. 1983. Questionnaires as a Software Evaluation Tool. Proceedings of the CHI '83 Conference on Human Factors in Computing Systems, Boston, MA, USA, pp. 83-87.

Richard, D. O. & H. E. Peter. 1973. Pattern Classification and Scene Analysis. New York: John Wiley & Sons.

Richard, G. F. & F. A. Behrouz. 2001. Data Structures: A Pseudocode Approach with C++. Pacific Grove, CA: Brooks/Cole, p. 339. ISBN 0-534-95216-X.

Ruochen, L., S. Zhengchun & J. Licheng. 2009. Gene Transposon Based Clonal Selection Algorithm for Clustering. GECCO '09, July 8-12, 2009, Montreal, Quebec, Canada.

Ryan, B. S. J. d. 2008 (in press). Data Mining for Education. To appear in McGaw, B., Peterson, P.

Rygielski, C., J. C. Wang & D. C. Yen. 2002. Data mining techniques for customer relationship management. Technology in Society, 24, pp. 483-502.

Salazar, A., J. Gosalbez, I. Bosch, R. Miralles & L. Vergara. 2004. A Case Study of Knowledge Discovery on Academic Achievement, Student Desertion and Student Retention.

Sanjoy, D. & L. M. Philip. 2010. Performance guarantees for hierarchical clustering.

Se June, H. & W. M. Sholom. 2001. Advances in Predictive Models for Data Mining. Pattern Recognition Letters, 22, pp. 55-61.

Seong, J. M. 2008. "An Interactive, 3D Visual Exploration Tool for Undirected Relationships". Tenth IEEE International Symposium on Multimedia, December, pp. 460-467.

Sri, W. K., B. Vasudha & K. Harleen. 2006. The Impact of Data Mining Techniques on Medical Diagnostics. Data Science Journal, October 2006, pp. 119-126.

Shu, L. H. 2003. Knowledge management technologies and applications: literature review from 1995 to 2002. Expert Systems with Applications, Vol. 25(2), pp. 155-161.

Stephen, M. A., J. T. Warren & B. E. Stephen. 1999. Application of Data Mining to Intensive Care Unit Microbiologic Data. The University of Alabama at Birmingham, Birmingham, Alabama, USA.

Stephen, S. L. & F. B. Aaron. 1995. Visual tracking using closed worlds. In Proceedings of the Fifth International Conference on Computer Vision (ICCV '95), pp. 672-678, MIT, Cambridge, MA, June 20-23.

Stuart, M., H. Yulan & L. Kecheng. 2009. An Empirical Framework for Automatically Selecting the Best Bayesian Classifier. Proceedings of the World Congress on Engineering 2009, Vol. I, WCE 2009, July 1-3, 2009, London, UK.

Sunita, K. 2008. Visual Data Mining. Proceedings of the 2nd National Conference on Challenges & Opportunities in Information Technology (COIT-2008), RIMT-IET, Mandi Gobindgarh, March 29, 2008.

Sung, H. H., B. M. Sung & P. C. Sang. 2002. Customers' Time-Variant Purchase Behavior and Corresponding Marketing Strategies: An Online Retailer's Case. Computers and Industrial Engineering, 43, pp. 501-520.

Sudipto, G., R. Rajeev & S. Kyuseok. 1998. An efficient clustering algorithm for large databases. In Proceedings of SIGMOD, pp. 73-84, June 1998.

Sudipto, G., R. Rajeev & S. Kyuseok. 1999. "ROCK: A Robust Clustering Algorithm for Categorical Attributes". In Proceedings of the IEEE Conference on Data Engineering.

Sudipto, G., M. Adam, M. Nina, M. Rajeev & O. Liadan. 2003. Clustering data streams: Theory and practice. IEEE Transactions on Knowledge and Data Engineering, 15.

Tian, Z., R. Raghu & L. Miron. 1996. BIRCH: An efficient data clustering method for very large databases. In Proceedings of the 1996 ACM-SIGMOD International Conference on Management of Data (SIGMOD '96), pp. 103-114.

Tian, Z., R. Raghu & L. Miron. 1997. BIRCH: An efficient data clustering method for very large databases. Data Mining and Knowledge Discovery, 1(2), pp. 141-182.

Usama, F., P-S. Gregory & S. Padhraic.
1996 (Fall). "From data mining to knowledge discovery in databases". AI Magazine, 17(3), pp. 37-54.

Usama, F., G. Georges & W. Andreas. 2002. Information Visualization in Data Mining and Knowledge Discovery. Morgan Kaufmann Publishers.

Ware, C. 2004. Information Visualization: Perception for Design. Morgan Kaufmann.

Weiguo, H., W. Jinfeng & S. L. Shili. 2006. Visual Exploratory Data Analysis of Traffic Volume. pp. 695-703.

Venkatesh, G., R. Raghu, G. Johannes, P. Allison & F. James. 1999. Clustering Large Datasets in Arbitrary Metric Spaces. ICDE 1999, pp. 502-511.

Vida, T. & J. H. George. 1998. Data Mining and Statistics in Medicine: An Application in Prostate Cancer Detection. In JSM98, Proceedings of the Joint Statistical Meetings, Section on Physical and Engineering Sciences.

Viégas, F. B. & M. Wattenberg. 2007. Artistic data visualization: Beyond visual analytics. In D. Schuler (ed.), Lecture Notes in Computer Science, Vol. 4564, pp. 152-191. Springer Berlin/Heidelberg.

Yangyang, L., S. Hongzhu, G. Maoguo & S. Ronghua. 2009. Quantum-Inspired Evolutionary Clustering Algorithm Based on Manifold Distance. GECCO '09, June 12-14, 2009, Shanghai, China.

Yaniv, L., P. Elon, F. Menachem & L. Michal. 2005. Efficient algorithms for exact hierarchical clustering of huge datasets: Tackling the entire protein space. Vol. 00(00), pp. 1-10.

Yang, S., Y. Jinping, H. Yanli & X. Weidong. 2008. An Improved Multivariate Data Visualization Technique. Proceedings of the 2008 IEEE International Conference on Information and Automation, June 20-23, 2008, Zhangjiajie, China.

Yuru, W., L. Jiafeng, L. Guojun, T. Xianglong & L. Peng. 2002. Observation and analysis of large-scale human motion. Human Movement Science, pp. 295-311.

Yiming, M., L. Bing, W. K. Ching, Y. S. Philip & L. M. Shuik. 2000. Targeting the Right Students Using Data Mining. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

Zhang, Y. Z., M. H. Cheng & F. X. Wang. 2006. An Approach on the Data Structure for the Matrix Storing Based on the Implementation of Agglomerative Hierarchical Clustering Algorithm. Computer Science, 2006(1), pp. 14-17.

Pilot Software. 1998. Whitepaper. Online: http://www.asc.com.jo/.

Bellinger, G., Castro, D. & Mills, A. Data, information and knowledge. http://www.systems-thinking.org/dikw/dikw.htm.

APPENDIX A

QUESTIONNAIRE

ALGORITHM DEVELOPMENT OF BIDIRECTIONAL AGGLOMERATIVE HIERARCHICAL CLUSTERING USING AVL TREE WITH VISUALIZATION

Doctor of Philosophy in Science and Technology
Hussain Mohammad Yousef Abu Dalbouh
Assoc. Prof. Dr. Norita Md Norwawi
Faculty of Science and Technology
Universiti Sains Islam Malaysia
hussainmdalbouh@yahoo.com
norita@usim.edu.my
Contact: 0172962017

About this questionnaire

I am a PhD student at the Faculty of Science and Technology, Islamic Science University of Malaysia. I am working on a dissertation research entitled "Bidirectional agglomerative hierarchical clustering algorithm". I would be grateful if you could be kind enough to spend a few minutes to watch the visualization part of the Bidirectional prototype and then answer this research questionnaire, which aims to look at the benefits of using visualization and the significance of including the user in the data exploration process. I am interested in studying how the visualization part of the Bidirectional agglomerative hierarchical clustering algorithm prototype can help the user or data miners to understand the structure of the data, support interaction between the user and the data, and include the user in the data exploration process. Also, through visualization one can observe and explore knowledge from the data that may have been missed by data mining algorithms.
The purpose of this questionnaire is to help me gain an understanding of the user who will use the visualization part of the Bidirectional Agglomerative Hierarchical Clustering Algorithm, and to get any additional feedback or comments about it. All the information you provide is confidential. Your name is not stored with this questionnaire, and the information you provide will not be used for any other purposes.

Part 1: General Information

1. Gender: [ ] Male [ ] Female
2. Age: ____ Years
3. Qualification: [ ] Diploma [ ] Bachelor [ ] Master [ ] PhD
4. Have you used any visualization software before: [ ] Yes [ ] No

Part 2: Overall Satisfaction

This part is intended to rate your satisfaction with the overall usability and usefulness of the visualization part of the Bidirectional prototype. Please indicate [X] your agreement with the next set of statements using the following rating scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree.

Table 1: Questionnaire questions

Perceived Usefulness
1. Visualization part of bidirectional prototype will enable the user to get the information of the data quickly.
2. Visualization part of bidirectional prototype will increase understanding about the data.
3. Visualization part of bidirectional prototype will enable the user to be interactive with the data.
4. Using visualization part of bidirectional prototype can help the user explore and observe the knowledge easier.
5. Working with visualization output is easier than working with numeric output.
6. Through visualization humans might catch and observe hidden patterns and rules in data.
7. Through visualization part of bidirectional prototype the user is directly involved and interactive in the data processes by exploiting the power of the human sight and brain for analyzing and exploring data.

Perceived Ease of Use
8. Learning to operate visualization part of bidirectional prototype is easy.
9. Interaction with visualization part of bidirectional prototype is clear and understandable.
10. It is easy to become skilful at using visualization to explore and observe the knowledge.
11. Visualization part of bidirectional prototype is easy to use.

User Satisfaction
12. I am completely satisfied in using the visualization part of bidirectional prototype.
13. I feel very confident in using the visualization part of bidirectional prototype.
14. It is easy to stay aware of what is happening in and around huge amounts of data.

Usability
15. It is easy to interact with visualization part of bidirectional prototype by using the tree.
16. The procedure through visualization part of bidirectional prototype by tree is clear.
17. I found it easy to understand the structure of the data.

COMMENT:

USABILITY TESTING

To evaluate the usability of the visualization part of the bidirectional agglomerative hierarchical clustering algorithm prototype, two components need to be evaluated. The first is the prototype's usefulness; the second is the prototype's ease of use. Both are viewed from the perspective of TAM (Technology Acceptance Model); the PUEU (Perceived Usefulness and Ease of Use) instrument was adapted from Davis (1989), as shown in Figure 1.

Figure 1: Usability testing plan (two components: Usefulness and Ease of Use)

The content of the questions was inspired by Davis (1989) and Babar et al. (2007).

Question 1: Visualization part of bidirectional prototype will enable the user to get the information of the data quickly.

Table 2 shows that 54.0% of the respondents agreed and 12.0% strongly agreed that the visualization will enable the user to get the information of the data quickly. The histogram and statistic of Question 1 are shown in Figures 2 and 3 respectively.
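Every response table that follows (Tables 2 through 13) uses the same SPSS-style layout: frequency, percent, and cumulative percent per Likert category for N = 50 respondents, with the mean and sample standard deviation reported in the accompanying histogram. As a minimal sketch of how these figures are derived (the helper `likert_summary` is an illustrative name, not part of the thesis), the Question 1 statistics can be reproduced from the raw response codes:

```python
from collections import Counter
import math

LABELS = {1: "strongly disagree", 2: "disagree", 3: "neutral",
          4: "agree", 5: "strongly agree"}

def likert_summary(codes):
    """Frequency, percent, cumulative percent, mean and sample std. dev.
    for 1-5 Likert codes, mirroring the SPSS-style tables in this appendix."""
    n = len(codes)
    counts = Counter(codes)
    rows, cum = [], 0.0
    for code in sorted(counts):
        pct = 100.0 * counts[code] / n
        cum += pct
        rows.append((LABELS[code], counts[code], round(pct, 1), round(cum, 1)))
    mean = sum(codes) / n
    # SPSS reports the sample standard deviation (n - 1 in the denominator).
    sd = math.sqrt(sum((c - mean) ** 2 for c in codes) / (n - 1))
    return rows, round(mean, 2), round(sd, 3)

# Question 1 responses: 3 disagree, 14 neutral, 27 agree, 6 strongly agree.
codes = [2] * 3 + [3] * 14 + [4] * 27 + [5] * 6
rows, mean, sd = likert_summary(codes)
```

Running this on the Question 1 responses gives Mean = 3.72 and Std. Dev. = 0.757, matching Table 2 and Figure 2; the same calculation was used to check the garbled histogram statistics throughout this appendix.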
Table 2: Question 1 response

  Response          Frequency   Percent   Valid Percent   Cumulative Percent
  Disagree                  3       6.0             6.0                  6.0
  Neutral                  14      28.0            28.0                 34.0
  Agree                    27      54.0            54.0                 88.0
  Strongly agree            6      12.0            12.0                100.0
  Total                    50     100.0           100.0

Figure 2: Question 1 histogram (Mean = 3.72, Std. Dev. = 0.757, N = 50)

Figure 3: Question 1 statistic (pie chart of responses)

Question 2: Visualization part of bidirectional prototype will increase understanding about the data.

Table 3 shows that 44.0% of the respondents agreed and 18.0% strongly agreed that the visualization will increase understanding about the data. The histogram and statistic of Question 2 are shown in Figures 4 and 5 respectively.

Table 3: Question 2 response

  Response          Frequency   Percent   Valid Percent   Cumulative Percent
  Disagree                  3       6.0             6.0                  6.0
  Neutral                  16      32.0            32.0                 38.0
  Agree                    22      44.0            44.0                 82.0
  Strongly agree            9      18.0            18.0                100.0
  Total                    50     100.0           100.0

Figure 4: Question 2 histogram (Mean = 3.74, Std. Dev. = 0.828, N = 50)

Figure 5: Question 2 statistic (pie chart of responses)

Question 3: Visualization part of bidirectional prototype will enable the user to be interactive with the data.

Table 4 shows that 46.0% of the respondents agreed and 20.0% strongly agreed that the visualization will enable the user to be interactive with the data. The histogram and statistic of Question 3 are shown in Figures 6 and 7 respectively.

Table 4: Question 3 response

  Response            Frequency   Percent   Valid Percent   Cumulative Percent
  Strongly disagree           1       2.0             2.0                  2.0
  Disagree                    3       6.0             6.0                  8.0
  Neutral                    13      26.0            26.0                 34.0
  Agree                      23      46.0            46.0                 80.0
  Strongly agree             10      20.0            20.0                100.0
  Total                      50     100.0           100.0

Figure 6: Question 3 histogram (Mean = 3.76, Std. Dev. = 0.916, N = 50)

Figure 7: Question 3 statistic (pie chart of responses)

Question 4: Using visualization part of bidirectional prototype can help the user explore and observe the knowledge easier.

Table 5 shows that 40.0% of the respondents agreed and 22.0% strongly agreed that the visualization can help the user explore and observe the knowledge easier. The histogram and statistic of Question 4 are shown in Figures 8 and 9 respectively.

Table 5: Question 4 response

  Response          Frequency   Percent   Valid Percent   Cumulative Percent
  Disagree                  5      10.0            10.0                 10.0
  Neutral                  14      28.0            28.0                 38.0
  Agree                    20      40.0            40.0                 78.0
  Strongly agree           11      22.0            22.0                100.0
  Total                    50     100.0           100.0

Figure 8: Question 4 histogram (Mean = 3.74, Std. Dev. = 0.922, N = 50)

Figure 9: Question 4 statistic (pie chart of responses)

Question 5: Working with visualization output is easier than working with numeric output.

Table 6 shows that 40.0% of the respondents agreed and 24.0% strongly agreed that working with visualization output is easier than working with numeric output. The histogram and statistic of Question 5 are shown in Figures 10 and 11 respectively.

Table 6: Question 5 response

  Response          Frequency   Percent   Valid Percent   Cumulative Percent
  Disagree                  4       8.0             8.0                  8.0
  Neutral                  14      28.0            28.0                 36.0
  Agree                    20      40.0            40.0                 76.0
  Strongly agree           12      24.0            24.0                100.0
  Total                    50     100.0           100.0

Figure 10: Question 5 histogram (Mean = 3.80, Std. Dev. = 0.903, N = 50)

Figure 11: Question 5 statistic (pie chart of responses)

Question 6: Through visualization humans might catch and observe hidden patterns and rules in data.

Table 7 shows that 52.0% of the respondents agreed and 18.0% strongly agreed that humans can catch and observe hidden patterns and rules in data via visualization. The histogram and statistic of Question 6 are shown in Figures 12 and 13 respectively.

Table 7: Question 6 response

  Response            Frequency   Percent   Valid Percent   Cumulative Percent
  Strongly disagree           1       2.0             2.0                  2.0
  Disagree                    1       2.0             2.0                  4.0
  Neutral                    13      26.0            26.0                 30.0
  Agree                      26      52.0            52.0                 82.0
  Strongly agree              9      18.0            18.0                100.0
  Total                      50     100.0           100.0

Figure 12: Question 6 histogram (Mean = 3.82, Std. Dev. = 0.825, N = 50)

Figure 13: Question 6 statistic (pie chart of responses)

Question 7: Through visualization part of bidirectional prototype the user is directly involved and interactive in the data processes by exploiting the power of the human sight and brain for analyzing and exploring data.
Table 8 shows that 42.0% of the respondents agreed and 28.0% strongly agreed that the user is directly involved and interactive in the data processes by exploiting the power of human sight and the brain for analyzing and exploring data. Histogram and Statistic of Question 7 are shown in Figures 14 and 15 respectively.

Table 8: Question 7 response

                      Frequency   Percent   Valid Percent   Cumulative Percent
  Disagree                 2        4.0          4.0                4.0
  Neutral                 13       26.0         26.0               30.0
  Agree                   21       42.0         42.0               72.0
  Strongly agree          14       28.0         28.0              100.0
  Total                   50      100.0        100.0

[Figure 14: Question 7 histogram (Mean = 3.94, Std. Dev. = 0.843, N = 50)]
[Figure 15: Question 7 statistic (pie chart of response categories)]

Question 8: Learning to operate the visualization part of the bidirectional Prototype is easy.

Table 9 shows that 46.0% of the respondents agreed and 16.0% strongly agreed that learning to operate the visualization part of the bidirectional Prototype is easy. Histogram and Statistic of Question 8 are shown in Figures 16 and 17 respectively.

Table 9: Question 8 response

                      Frequency   Percent   Valid Percent   Cumulative Percent
  Strongly disagree        1        2.0          2.0                2.0
  Disagree                 3        6.0          6.0                8.0
  Neutral                 15       30.0         30.0               38.0
  Agree                   23       46.0         46.0               84.0
  Strongly agree           8       16.0         16.0              100.0
  Total                   50      100.0        100.0

[Figure 16: Question 8 histogram (Mean = 3.68, Std. Dev. = 0.891, N = 50)]
[Figure 17: Question 8 statistic (pie chart of response categories)]

Question 9: Interaction with the visualization part of the bidirectional Prototype is clear and understandable.

Table 10 shows that 42.0% of the respondents agreed and 22.0% strongly agreed that interaction with the visualization Prototype is clear and understandable. Histogram and Statistic of Question 9 are shown in Figures 18 and 19 respectively.

Table 10: Question 9 response

                      Frequency   Percent   Valid Percent   Cumulative Percent
  Disagree                 4        8.0          8.0                8.0
  Neutral                 14       28.0         28.0               36.0
  Agree                   21       42.0         42.0               78.0
  Strongly agree          11       22.0         22.0              100.0
  Total                   50      100.0        100.0

[Figure 18: Question 9 histogram (Mean = 3.78, Std. Dev. = 0.887, N = 50)]
[Figure 19: Question 9 statistic (pie chart of response categories)]

Question 10: It is easy to become skilful at using visualization to explore and observe the Knowledge.

Table 11 shows that 42.0% of the respondents agreed and 22.0% strongly agreed that it is easy to become skilful at using visualization to explore and observe the Knowledge. Histogram and Statistic of Question 10 are shown in Figures 20 and 21 respectively.

Table 11: Question 10 response

                      Frequency   Percent   Valid Percent   Cumulative Percent
  Strongly disagree        1        2.0          2.0                2.0
  Disagree                 2        4.0          4.0                6.0
  Neutral                 15       30.0         30.0               36.0
  Agree                   21       42.0         42.0               78.0
  Strongly agree          11       22.0         22.0              100.0
  Total                   50      100.0        100.0

[Figure 20: Question 10 histogram (Mean = 3.78, Std. Dev. = 0.910, N = 50)]
[Figure 21: Question 10 statistic (pie chart of response categories)]

Question 11: The visualization part of the bidirectional Prototype is easy to use.

Table 12 shows that 38.0% of the respondents agreed and 28.0% strongly agreed that the visualization part of the bidirectional Prototype is easy to use. Histogram and Statistic of Question 11 are shown in Figures 22 and 23 respectively.

Table 12: Question 11 response

                      Frequency   Percent   Valid Percent   Cumulative Percent
  Strongly disagree        3        6.0          6.0                6.0
  Disagree                 3        6.0          6.0               12.0
  Neutral                 11       22.0         22.0               34.0
  Agree                   19       38.0         38.0               72.0
  Strongly agree          14       28.0         28.0              100.0
  Total                   50      100.0        100.0

[Figure 22: Question 11 histogram (Mean = 3.76, Std. Dev. = 1.117, N = 50)]
[Figure 23: Question 11 statistic (pie chart of response categories)]

Question 12: I am completely satisfied in using the visualization part of the bidirectional Prototype.

Table 13 shows that 46.0% of the respondents agreed and 18.0% strongly agreed that the users and data miners are completely satisfied in using the visualization part of the bidirectional Prototype. Histogram and Statistic of Question 12 are shown in Figures 24 and 25 respectively.

Table 13: Question 12 response

                      Frequency   Percent   Valid Percent   Cumulative Percent
  Disagree                 2        4.0          4.0                4.0
  Neutral                 16       32.0         32.0               36.0
  Agree                   23       46.0         46.0               82.0
  Strongly agree           9       18.0         18.0              100.0
  Total                   50      100.0        100.0

[Figure 24: Question 12 histogram (Mean = 3.78, Std. Dev. = 0.790, N = 50)]
[Figure 25: Question 12 statistic (pie chart of response categories)]

Question 13: I feel very confident in using the visualization part of the bidirectional Prototype.

Table 14 shows the Question 13 response. Histogram and Statistic of Question 13 are shown in Figures 26 and 27 respectively.

[Figure 26: Question 13 histogram]
[Figure 27: Question 13 statistic (pie chart of response categories)]

Question 14: I found it easy to stay aware of what is happening in and around their environments from the huge amounts of data.

Table 15 shows that 26.0% of the respondents agreed and 26.0% strongly agreed that the users and data miners found it easy to stay aware of what is happening in and around their environments from the huge amounts of data. Histogram and Statistic of Question 14 are shown in Figures 28 and 29 respectively.

Table 15: Question 14 response

                      Frequency   Percent   Valid Percent   Cumulative Percent
  Strongly disagree        1        2.0          2.0                2.0
  Disagree                 1        2.0          2.0                4.0
  Neutral                 22       44.0         44.0               48.0
  Agree                   13       26.0         26.0               74.0
  Strongly agree          13       26.0         26.0              100.0
  Total                   50      100.0        100.0

[Figure 28: Question 14 histogram (Mean = 3.72, Std. Dev. = 0.948, N = 50)]
[Figure 29: Question 14 statistic (pie chart of response categories)]

Question 15: It is easy to interact with visualization on the bidirectional Prototype by using the Tree.

Table 16 shows that 38.0% of the respondents agreed and 28.0% strongly agreed that it is easy to interact with visualization on the bidirectional Prototype by using the Tree. Histogram and Statistic of Question 15 are shown in Figures 30 and 31 respectively.

Table 16: Question 15 response

                      Frequency   Percent   Valid Percent   Cumulative Percent
  Strongly disagree        3        6.0          6.0                6.0
  Disagree                 4        8.0          8.0               14.0
  Neutral                 10       20.0         20.0               34.0
  Agree                   19       38.0         38.0               72.0
  Strongly agree          14       28.0         28.0              100.0
  Total                   50      100.0        100.0

[Figure 30: Question 15 histogram (Mean = 3.74, Std. Dev. = 1.139, N = 50)]
[Figure 31: Question 15 statistic (pie chart of response categories)]

Question 16: The procedure through the visualization part of the bidirectional prototype by tree is clear.

Table 17 shows that 42.0% of the respondents agreed and 22.0% strongly agreed that the procedure through the visualization part of the bidirectional prototype by tree is clear. Histogram and Statistic of Question 16 are shown in Figures 32 and 33 respectively.
Table 17: Question 16 response

                      Frequency   Percent   Valid Percent   Cumulative Percent
  Disagree                 3        6.0          6.0                6.0
  Neutral                 15       30.0         30.0               36.0
  Agree                   21       42.0         42.0               78.0
  Strongly agree          11       22.0         22.0              100.0
  Total                   50      100.0        100.0

[Figure 32: Question 16 histogram (Mean = 3.80, Std. Dev. = 0.857, N = 50)]
[Figure 33: Question 16 statistic (pie chart of response categories)]

Question 17: I found it easy to understand the structure of the data.

Table 18 shows that 46.0% of the respondents agreed and 20.0% strongly agreed that they found it easy to understand the structure of the data. Histogram and Statistic of Question 17 are shown in Figures 34 and 35 respectively.

Table 18: Question 17 response

                      Frequency   Percent   Valid Percent   Cumulative Percent
  Strongly disagree        1        2.0          2.0                2.0
  Disagree                 3        6.0          6.0                8.0
  Neutral                 13       26.0         26.0               34.0
  Agree                   23       46.0         46.0               80.0
  Strongly agree          10       20.0         20.0              100.0
  Total                   50      100.0        100.0

[Figure 34: Question 17 histogram (Mean = 3.76, Std. Dev. = 0.916, N = 50)]
[Figure 35: Question 17 statistic (pie chart of response categories)]

APPENDIX B

AVL TREE CODE

import javax.swing.JOptionPane;
import javax.swing.JTextArea;

class AVL {

    Node n1;
    Node n2;
    double ndis;
    Node root;
    int count;
    int A = 0;

    // make the tree empty
    public void makeEmpty() {
        root = null;
        count = 0;
    }

    // Find function that returns the node holding the specified distance value
    public Node find(double number, Node root) {
        while (root != null) {
            if (number == root.dis)
                return root;                 // match
            else if (number < root.dis)
                root = root.left;
            else
                root = root.right;
        }
        return null;                         // no match
    }

    // return the dis of the node whose name equals s (root.dis if not found)
    public double findMin2(Node t, String s) {
        double e = root.dis;
        if (t != null) {
            if (t.name.equals(s))
                return t.dis;
            e = findMin2(t.left, s);
            if (e == root.dis)
                e = findMin2(t.right, s);
        }
        return e;
    }

    // record (in n1) the node whose name equals s
    public void find2Min2(Node root, String s) {
        if (root != null) {
            if (root.name.equals(s))
                n1 = root;
            find2Min2(root.left, s);
            find2Min2(root.right, s);
        }
    }

    // record (in n2) another node carrying the same pair of names as n1
    public void findn2(Node root) {
        if (root != null) {
            if (root.name.equals(n1.name) && root.name1.equals(n1.name1)
                    && root.dis != n1.dis)
                n2 = root;
            findn2(root.left);
            findn2(root.right);
        }
    }

    // keep the closest of the duplicated nodes in n1 and delete the others
    public void findnmorethen2(Node t) {
        if (t != null) {
            findnmorethen2(t.left);
            findnmorethen2(t.right);
            if (t.name.equals(n1.name) && t.name1.equals(n1.name1)
                    && t.dis <= n1.dis)
                n1 = t;
            else
                this.delete(t, root);
        }
    }

    // record (in ndis) the distance of the node labelled (ab, b)
    public void findMin2b(Node t, String ab, String b) {
        if (t != null) {
            findMin2b(t.left, ab, b);
            if (t.name.equals(ab) && t.name1.equals(b))
                ndis = t.dis;
            findMin2b(t.right, ab, b);
        }
    }

    // rename every node labelled a or b to the merged label a+b
    public void findandrename(Node t, String a, String b) {
        if (t != null) {
            if (t.name.equals(a) || t.name.equals(b)) {
                t.name = a + b;
                A++;
            } else if (t.name1.equals(a) || t.name1.equals(b)) {
                t.name1 = t.name;
                t.name = a + b;
                A++;
            }
            findandrename(t.left, a, b);
            findandrename(t.right, a, b);
        }
    }

    public void findandrename2(Node t, String a, String b) {
        if (t != null) {
            if (t.name.equals(a) || t.name.equals(b)) {
                t.name = a + b;
                A++;
            } else if (t.name1.equals(a) || t.name1.equals(b)) {
                t.name1 = t.name;
                t.name = a + b;
                A++;
            }
            findandrename(t.left, a, b);
            findandrename(t.right, a, b);
        }
    }

    // rename every node labelled a, b or c to the merged label m
    public void findandrename1(Node t, String a, String b, String c, String m) {
        if (t != null) {
            if (t.name.equals(a) || t.name.equals(b) || t.name.equals(c)) {
                t.name = m + "/";
                A++;
            } else if (t.name1.equals(a) || t.name1.equals(b) || t.name1.equals(c)) {
                t.name1 = t.name;
                t.name = m + "/";
                A++;
            }
            findandrename1(t.left, a, b, c, m);
            findandrename1(t.right, a, b, c, m);
        }
    }

    // delete the nodes linking the merged label m back to a, b or c
    public void findde(Node t, String a, String b, String c, String m) {
        if (t != null) {
            if (t.name.equals(m)
                    && (t.name1.equals(b) || t.name1.equals(c) || t.name1.equals(a)))
                this.delete(t, root);
            findde(t.left, a, b, c, m);
            findde(t.right, a, b, c, m);
        }
    }

    // mark every node at distance a
    public void findandadddot(Node t, double a) {
        if (t != null) {
            if (t.dis == a)
                t.name = t.name + ".";
            findandadddot(t.right, a);
            findandadddot(t.left, a);
        }
    }

    // FindMin function that returns the leftmost (smallest) node of the subtree
    public Node findMin(Node t) {
        if (t == null)
            return t;
        while (t.left != null)
            t = t.left;
        return t;
    }

    // Height function that returns the height of the node, or -1 if the node is null
    public int height(Node t) {
        return t == null ? -1 : t.height;
    }

    // single right rotation
    public Node SRR(Node k1) {
        Node tempParent = k1.parent;
        Node k2 = k1.left;                   // k2 will take the position of k1
        if (tempParent != null && tempParent.left == k1)
            tempParent.left = k2;
        else if (tempParent != null && tempParent.right == k1)
            tempParent.right = k2;
        k1.left = k2.right;
        if (k2.right != null)
            k2.right.parent = k1;
        k2.right = k1;
        k2.parent = k1.parent;
        k1.parent = k2;
        // set the heights of the rearranged nodes
        k1.height = Math.max(height(k1.left), height(k1.right)) + 1;
        k2.height = Math.max(height(k2.left), k1.height) + 1;
        if (k1 == root)
            root = k2;                       // if k1 was the root, k2 becomes the root
        return k2;
    }

    // single left rotation
    public Node SLR(Node k1) {
        Node tempParent = k1.parent;
        Node k2 = k1.right;
        k2.parent = tempParent;
        if (tempParent != null && tempParent.left == k1)
            tempParent.left = k2;
        else if (tempParent != null && tempParent.right == k1)
            tempParent.right = k2;
        k1.right = k2.left;
        if (k2.left != null)
            k2.left.parent = k1;
        k2.left = k1;
        k1.parent = k2;
        k1.height = Math.max(height(k1.left), height(k1.right)) + 1;
        k2.height = Math.max(height(k2.left), k1.height) + 1;
        if (k1 == root)                      // if k1 was the root, k2 becomes the root
            root = k2;
        return k2;
    }

    // double right rotation: two single rotations, first left then right
    public Node DRR(Node k1) {
        k1.left = SLR(k1.left);
        return SRR(k1);
    }

    // double left rotation: two single rotations, first right then left
    public Node DLR(Node k1) {
        k1.right = SRR(k1.right);
        return SLR(k1);
    }

    // insert a node into the tree and check the balance after the insertion
    public Node insert(String name, String name1, double number, Node t, Node p) {
        if (t == null) {
            t = new Node(name, name1, number);
            t.parent = p;
            if (count == 0)
                root = t;
            count++;
        } else if (number < t.dis) {
            t.left = insert(name, name1, number, t.left, t);
            if (height(t.left) - height(t.right) == 2) {
                if (number < t.left.dis)
                    t = SRR(t);
                else
                    t = DRR(t);
            }
        } else if (number > t.dis) {
            t.right = insert(name, name1, number, t.right, t);
            if (height(t.right) - height(t.left) == 2) {
                if (number > t.right.dis)
                    t = SLR(t);
                else
                    t = DLR(t);
            }
        }
        // set the height of the node
        t.height = Math.max(height(t.left), height(t.right)) + 1;
        return t;
    }

    // Delete function that deletes a node from the tree;
    // returns true if the node was deleted and false if the tree is empty
    public boolean delete(Node t, Node r) {
        if (r == null)                       // if the tree is empty return false
            return false;
        else if (t.left == null && t.right == null)
            delWithNoChild(t);
        else if (t.left == null || t.right == null)
            delWithOneChild(t);
        else
            delWithChilds(t);
        return true;
    }

    // delete a node with no children
    private boolean delWithNoChild(Node t) {
        if (t == root) {
            count = 0;
            root = null;
        } else {
            Node tempParent = t.parent;
            if (tempParent.left == t)
                tempParent.left = null;
            else
                tempParent.right = null;
            tempParent.height = Math.max(height(tempParent.left),
                    height(tempParent.right)) + 1;
            if (height(tempParent.right) - height(tempParent.left) == 2)
                SLR(tempParent);
            else if (height(tempParent.left) - height(tempParent.right) == 2)
                SRR(tempParent);
        }
        return true;
    }

    // delete a node with only one child
    private boolean delWithOneChild(Node t) {
        if (t == root) {                     // in case the deleted node is the root
            if (t.right == null) {
                root = t.left;
                t.left.parent = null;
            } else {
                root = t.right;
                t.right.parent = null;
            }
        } else {
            Node temp;
            if (t.right == null)
                temp = t.left;
            else
                temp = t.right;
            Node tempParent = t.parent;
            if (tempParent.right == t)
                tempParent.right = temp;
            else
                tempParent.left = temp;
            temp.parent = tempParent;
            tempParent.height = Math.max(height(tempParent.left),
                    height(tempParent.right)) + 1;
            if (height(tempParent.right) - height(tempParent.left) == 2)
                SLR(tempParent);
            else if (height(tempParent.left) - height(tempParent.right) == 2)
                SRR(tempParent);
        }
        return true;
    }

    // delete a node with two children
    private boolean delWithChilds(Node t) {
        Node temp = findMin(t.right);
        t.name = temp.name;
        t.name1 = temp.name1;
        t.dis = temp.dis;
        delWithNoChild(temp);
        return true;
    }

    public void printTree(Node t, JTextArea txt) {
        if (t != null) {
            txt.append(t.name + ", " + t.name1 + "\t" + t.dis + "\t" + t.height + "\n");
            printTree(t.left, txt);
            printTree(t.right, txt);
        }
    }

    public void maincluster(Node r) {
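The appendix listing is the thesis's own AVL implementation, with parent pointers and cluster-name payloads. As a compact, self-contained sketch of the same balancing technique (the class and method names below are mine; keys are plain doubles standing in for the `dis` field, and parent pointers are dropped), an AVL insert with the four rotation cases looks like:

```java
// Minimal AVL insertion sketch: the single/double rotations (the SRR/SLR/DRR/DLR
// of the appendix) keep every subtree's height difference within 1.
public class AvlSketch {

    public static class Node {
        public double key;
        public Node left, right;
        public int height;                   // a leaf has height 0
        Node(double k) { key = k; }
    }

    static int height(Node t) { return t == null ? -1 : t.height; }

    static void fixHeight(Node t) {
        t.height = Math.max(height(t.left), height(t.right)) + 1;
    }

    // single right rotation (SRR): the left child k2 moves up
    static Node rotateRight(Node k1) {
        Node k2 = k1.left;
        k1.left = k2.right;
        k2.right = k1;
        fixHeight(k1);
        fixHeight(k2);
        return k2;
    }

    // single left rotation (SLR): the right child k2 moves up
    static Node rotateLeft(Node k1) {
        Node k2 = k1.right;
        k1.right = k2.left;
        k2.left = k1;
        fixHeight(k1);
        fixHeight(k2);
        return k2;
    }

    public static Node insert(double key, Node t) {
        if (t == null) return new Node(key);
        if (key < t.key) {
            t.left = insert(key, t.left);
            if (height(t.left) - height(t.right) == 2) {
                if (key < t.left.key) t = rotateRight(t);                   // left-left
                else { t.left = rotateLeft(t.left); t = rotateRight(t); }   // left-right (DRR)
            }
        } else {
            t.right = insert(key, t.right);
            if (height(t.right) - height(t.left) == 2) {
                if (key >= t.right.key) t = rotateLeft(t);                  // right-right
                else { t.right = rotateRight(t.right); t = rotateLeft(t); } // right-left (DLR)
            }
        }
        fixHeight(t);
        return t;
    }

    public static void main(String[] args) {
        Node root = null;
        // sorted insertion would degenerate a plain BST; rotations keep it balanced
        for (double k : new double[]{1, 2, 3, 4, 5, 6, 7}) root = insert(k, root);
        System.out.println(root.key + " height=" + root.height);  // prints: 4.0 height=2
    }
}
```

This mirrors the branch structure of the appendix's insert method; the thesis version additionally threads parent pointers so that the deletion routines can rebalance upward from the removed node.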