
Bibliography References for my Research on Artificial Intelligence, Machine Learning and Multi-Agent Systems

Artificial Intelligence Bibliography References

The following is the list of references I used in my research on Artificial Intelligence, Machine Learning, Multi-Agent Systems, Artificial Neural Networks, Reinforcement Learning, Evolutionary Algorithms, Genetic Algorithms, and the NeuroEvolution of Augmenting Topologies (NEAT) algorithm.

[1] A. Abbas and H. Sawamura. ALES: An innovative agent-based learning environment to teach argumentation. International Journal of Knowledge-Based and Intelligent Engineering Systems, 15(1):25–41, 2011.

[2] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147–169, 1985.

[3] R. Adobbati, A. N. Marshall, A. Scholer, and S. Tejada. Gamebots: A 3D virtual world test-bed for multi-agent research. In Proceedings of the Second International Workshop on Infrastructure for Agents, MAS, and Scalable MAS, 2001.

[4]M. Albert, T. Laengle, H. Woern, M. Capobianco, and A. Brighenti. Multi-agent systems for industrial diagnostics. Proceedings of 5th IFAC Symposium on Fault Detection, Supervision and Safety of Technical Processes, pages 483–488, 2003.

[5]R. C. Arkin. Behavior-Based Robotics. MIT Press, 1998.

[6]J. L. Austin. How to Do Things with Words. University Press, Oxford, 1962.

[7]F. Balbo and S. Pinson. A transportation decision support system in agent-based environment. Intelligent Decision Technologies, 1(3):97–115, 2007.

[8] T. Balch. Javasoccer. In H. Kitano, editor, RoboCup-97: Robot Soccer World Cup I, volume 1395 of Lecture Notes in Computer Science, pages 181–187. Springer Berlin / Heidelberg, 1998.

[9] F. L. Bellifemine, G. Caire, and D. Greenwood. Developing Multi-Agent Systems with JADE. Wiley, Chichester, UK, 2007.

[10] S. Ben-David, E. Kushilevitz, and Y. Mansour. Online learning versus offline learning. Machine Learning, 29(1):45–63, October 1997.

[11] B. Benz and J. Durant. XML Programming Bible. Wiley Publishing, Inc., New York, 2003.

[12] M. de Berg, M. van Kreveld, M. Overmars, and O. Schwarzkopf. Computational Geometry: Algorithms and Applications. Springer-Verlag, 2nd edition, 2000.

[13] T. Berners-Lee. Weaving the Web. Orion Business, London, 1999.

[14] H. G. Beyer. The Theory of Evolutionary Strategies. Springer, Berlin, 2001.

[15] R. Bhandari. Survivable Networks: Algorithms for Diverse Routing. Springer, 1999.

[16]J. P. Bigus and J. Bigus. Constructing Intelligent Agents Using Java. Wiley, New York, 2nd edition, 2001.

[17]D. Billings, A. Davidson, J. Schaeffer, and D. Szafron. The challenge of poker. Artificial Intelligence Journal, 134(1-2):201–240, 2002.

[18]D. M. Bourg and G. Seeman. AI for Game Developers. O’Reilly Media, July 2004.

[19] J. M. Bradshaw. Software Agents. AAAI Press / The MIT Press, 1997.

[20] M. Bratman. Intention, Plans, and Practical Reason. Harvard University Press, 1987.

[21]M. E. Bratman. Intention, Plans, and Practical Reason. Harvard University Press, Cambridge, USA, 1999.

[22] L. Breiman. Bagging predictors. Machine Learning, Kluwer Academic Publishers, Hingham, MA, USA, 24(2):123–140, 1996.

[23]C. Brown, P. Barnum, D. Costello, G. Ferguson, B. Hu, and M. V. Wie. Quake II as a robotic and multi-agent platform. Technical report, University of Rochester, 2004.

[24] C. Brown, G. Ferguson, P. Barnum, B. Hu, and D. Costello. Quagents: A game platform for intelligent agents. In Proceedings of the Artificial Intelligence and Interactive Digital Entertainment (AIIDE) International Conference, pages 9–14, 2005.

[25]B. D. Bryant. Evolving Visibly Intelligent Behavior for Embedded Game Agents. PhD thesis, Department of Computer Sciences, The University of Texas at Austin, 2006. Technical Report AI-06-334.

[26] B. D. Bryant and R. Miikkulainen. Neuroevolution for adaptive teams. In Proceedings of the 2003 Congress on Evolutionary Computation (CEC 2003), pages 2194–2201, Piscataway, NJ, 2003. IEEE.

[27] B. D. Bryant and R. Miikkulainen. Exploiting sensor symmetries in example-based training for intelligent agents. In S. J. Louis and G. Kendall, editors, Proceedings of the IEEE Symposium on Computational Intelligence and Games, pages 90–97. IEEE, Piscataway, NJ, 2006.

[28] B. D. Bryant and R. Miikkulainen. Acquiring visibly intelligent behavior with example-guided neuroevolution. In Proceedings of the 22nd National Conference on Artificial Intelligence, pages 801–808. AAAI Press, Menlo Park, 2007.

[29]B.D. Bryant and R. Miikkulainen. Evolving stochastic controller networks for intelligent game agents. In Proceedings of the 2006 IEEE Congress on Evolutionary Computation, pages 1007–1014. IEEE Press, 2006.

[30] M. Buckland. AI Techniques for Game Programming. Premier Press, Cincinnati, OH, USA, 2002.

[31] M. Buckland. Programming Game AI by Example. Wordware Publishing, Inc., Plano, Texas, USA, 2005.

[32] M. Buro. The Othello match of the year: Takeshi Murakami vs. Logistello. ICCA Journal, 20(3):189–193, 1997.

[33] M. Buro. ORTS: A hack-free RTS game environment. In J. Schaeffer, M. Müller, and Y. Björnsson, editors, Computers and Games, volume 2883 of Lecture Notes in Computer Science, pages 280–291. Springer Berlin / Heidelberg, 2003.

[34]M. Buro and T. Furtak. RTS games and real-time AI research. In Proceedings of the Behavior Representation in Modeling and Simulation Conference (BRIMS), pages 34–41. Arlington VA, 2004.

[35] T. J. Callantine. Air traffic controller agents. In Proceedings of the 2nd International Joint Conference on Autonomous Agents and Multiagent Systems, Melbourne, Australia, July 14–18, 2003.

[36] L. M. Camarinha-Matos and H. Afsarmanesh. Virtual enterprise modeling and support infrastructures: Applying multi-agent system approaches. In J. G. Carbonell and J. Siekmann, editors, Multi-Agent Systems and Applications, pages 335–364. Springer-Verlag Inc., New York, NY, USA, 2001.

[37] M. Campbell, A. J. Hoane Jr., and F.-h. Hsu. Deep Blue. Artificial Intelligence, 134(1-2):57–83, 2002.

[38] G. A. Carpenter and S. Grossberg. ART 2: Self-organization of stable category recognition codes for analog input patterns. Applied Optics, 26(23):4919–4930, 1987.

[39] G. A. Carpenter and S. Grossberg, editors. Neural Networks for Vision and Image Processing. MIT Press, Cambridge, Mass, 1992.

[40]M. Chung, M. Buro, and J. Schaeffer. Monte Carlo planning in RTS games. In Proceedings of IEEE Symposium on Computational Intelligence and Games (CIG’05), pages 117–124, 2005.

[41]P. Cohen and H. Levesque.  Intention is choice with commitment. Artificial Intelligence, 42:213–261, 1990.

[42]P. R. Cohen and H. J. Levesque. Speech acts and rationality. In Proceedings of the 23rd annual meeting on Association for Computational Linguistics (ACL ’85), pages 49–60, Chicago, Illinois, USA, 1985. Association for Computational Linguistics.

[43] P. R. Cohen and C. R. Perrault. Elements of a plan-based theory of speech acts. Cognitive Science, 3(3):177–212, 1979.

[44] N. Cole, S. Louis, and C. Miles. Using a genetic algorithm to tune first-person shooter bots. In Congress on Evolutionary Computation (CEC'04), volume 1, pages 139–145. IEEE, Piscataway, NJ, 2004.

[45] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. MIT Press and McGraw-Hill, 2nd edition, 2001.

[46] R. S. Cost, T. Finin, Y. Labrou, X. Luan, Y. Peng, I. Soboroff, and J. Mayfield. JACKAL: A Java-based tool for agent development. In Software Tools for Developing Agents, pages 73–83. AAAI Press, 1998.

[47] G. F. Coulouris, J. Dollimore, and T. Kindberg. Distributed Systems: Concepts and Design. Addison-Wesley, University of Michigan, fourth edition, 2005.

[48] K. Decker, K. Sycara, and M. Williamson. Intelligent adaptive information agents. Journal of Intelligent Information Systems, 9:239–260, 1996.

[49] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, Springer Berlin, Heidelberg, Germany, 1:269–271, 1959.

[50] T. D’Silva, R. Janik, M. Chrien, K. O. Stanley, and R. Miikkulainen. Retaining learned behavior during real-time neuroevolution. Artificial Intelligence and Interactive Digital Entertainment, 2005.

[51]G. Dudek, M. Jenkin, E. Milios, and D. Wilkes. Taxonomy for swarm robots. In Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’93), volume 1, pages 441–447. IEEE, Piscataway, NJ, USA, 1993.

[52] M. Fasli. Agent Technology for E-Commerce. John Wiley & Sons, Chichester, West Sussex, England, 2007.

[53] E. Feigenbaum, P. McCorduck, and H. P. Nii. The Rise of the Expert Company. Times Books, New York, 1988.

[54]C. Ferreira. Gene expression programming: A new adaptive algorithm for solving problems. Complex Systems, 13(2):87–129, 2001.

[55]A. Filippidis, L.C. Jain, and N.M. Martin. Fusion of intelligent agents for the detection of aircraft in SAR images. IEEE Transactions on Pattern Analysis and Machine Intelligence, USA, 22(3):378–384, March 2000.

[56]N. V. Findler. Studies in machine cognition using the game of poker. Communications of the ACM, 20(4):230–245, 1977.

[57]T. Finin, Y. Labrou, and J. Mayfield. KQML as an agent communication language. In Software Agents, pages 456 – 480. AAAI Press / The MIT Press, 1997.

[58] R. Finkel and J. L. Bentley. Quad trees: A data structure for retrieval on composite keys. Acta Informatica, 4(1):1–9, 1974.

[59] FIPA. FIPA Agent Management Specification. Technical Report XC00023H, Foundation for Intelligent Physical Agents, http://www.fipa.org, 2000.

[60] FIPA. FIPA Abstract Architecture Specification. Technical Report SC00001L, Foundation for Intelligent Physical Agents, http://www.fipa.org, 2002.

[61]D. Fogel. Using evolutionary programming to create networks that are capable of playing Tic-Tac-Toe. In Proceedings of IEEE International Conference on Neural Networks, pages 875–880. IEEE, San Francisco, 1993.

[62] D. Fogel. Review of computational intelligence: Imitating life. IEEE Trans. on Neural Networks, 6:1562–1565, 1995.

[63] D. B. Fogel. Blondie24: Playing at the Edge of AI. Morgan Kaufmann, San Francisco, 2001.

[64] L. J. Fogel, A. J. Owens, and M. J. Walsh. Artificial Intelligence through Simulated Evolution. John Wiley, New York, 1966.

[65]S. D. Fowell and R. Ward. The role of software agents in space operations. In Proceedings of 2002 Conference on Space Operations (SpaceOps 2002), Texas, USA, 2002.

[66]Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In International Conference on Machine Learning, pages 148–156, 1996.

[67]S. Fricke, K. Bsufka, J. Keiser, T. Schmidt, R. Sesseler, and S. Albayrak. Agent-based telematic services and telecom applications. Commun. ACM, 44(4):43–48, 2001.

[68]E. Friedman-Hill. JESS in Action: Java Rule-Based Systems. Manning Publications Co., New York, 2003.

[69] L. Galway, D. Charles, and M. Black. Machine learning in digital games: A survey. Artificial Intelligence Review, 29:123–161, 2008. doi:10.1007/s10462-009-9112-y.

[70]M. Gardner. How to build a game-learning machine and then teach it to play and to win. Scientific American, 206(3):138–144, 1962.

[71] B. Geisler. An empirical study of machine learning algorithms applied to modeling player behavior in a First-Person Shooter video game. Master's thesis, Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI, 2002.

[72] M. R. Genesereth and R. E. Fikes. Knowledge Interchange Format, version 3.0 reference manual. Technical report, Computer Science Department, Stanford University, 1992.

[73] D. E. Goldberg and J. Richardson. Genetic algorithms with sharing for multimodal function optimization. In J. J. Grefenstette, editor, Proceedings of the Second International Conference on Genetic Algorithms, pages 148–154. Morgan Kaufmann, San Francisco, CA, 1987.

[74] F. Gomez and R. Miikkulainen. Incremental evolution of complex general behavior. Adaptive Behavior, 5(3-4):317–342, 1997.

[75]V. Gorodetsky, O. Karsaev, V. Samoylov, and S. Serebryakov. Agent-based distributed decision making in dynamical operational environments. Intelligent Decision Technologies, 3(1):35–57, 2009.

[76] S. Graham, D. Davis, S. Simeonov, G. Daniels, P. Brittenham, Y. Nakamura, P. Fremantle, D. Koenig, and C. Zentner. Building Web services with Java: Making sense of XML, SOAP, WSDL, and UDDI. Developer's Library, 2002.

[77] D. Greenwood, G. Vitaglione, L. Keller, and M. Calisti. Service level agreement management with adaptive coordination. International Conference on Networking and Services (ICNS’06), 0:45, 2006.

[78] N. Griffiths and K.-M. Chao, editors. Agent-Based Service-Oriented Computing. Springer-Verlag, London, 1st edition, 2010.

[79] J. L. Gross and J. Yellen. Handbook of Graph Theory. CRC Press, 2004.

[80] S. Grossberg. Competitive learning: From interactive activation to adaptive resonance. Cognitive Science, 11:23–63, 1987.

[81] J. Hagelbäck and S. J. Johansson. Using multi-agent potential fields in real-time strategy games. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS '08), pages 631–638, Richland, SC, 2008. International Foundation for Autonomous Agents and Multiagent Systems.

[82] J. Hagelbäck and S. J. Johansson. A multiagent potential field-based bot for real-time strategy games. International Journal of Computer Games Technology, 2009:1–10, 2009.

[83] K. Haider, J. Tweedale, and L.C. Jain. An intelligent decision support system using expert systems in a MAS. In New Advances in Decision Technologies, Proceedings of the First International Symposium on Intelligent Decision Technologies, Himeji Japan, pages 213–222. Springer-Verlag, April 2009.

[84]K. Han, E. Lee, and Y. Lee. The impact of a peer-learning agent-based pair programming in a programming course. IEEE Transactions on Education, 53(2):318–327, May 2010.

[85]M. D. Hansen. SOA Using Java EE 5 Web Services. Pearson Education, Inc., RR Donnelley in Crawfordsville, Indiana, 2007.

[86]E. R. Harold. Processing XML with Java: A Guide to SAX, DOM, JDOM, JAXP, and TrAX. Addison-Wesley Longman Publishing Co., Inc. Boston, MA, USA, 2002.

[87] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, SSC-4(2):100–107, July 1968.

[88] T. Hartley and Q. Mehdi. Online action adaptation in interactive computer games. Computers in Entertainment (CIE), 7(2):1–31, 2009.

[89] E. J. Hastings, R. Guha, and K. O. Stanley. Evolving content in the Galactic Arms Race video game. In Proceedings of the IEEE Symposium on Computational Intelligence and Games (CIG'09). IEEE, Piscataway, NJ, 2009.

[90] J. Haugeland. Artificial Intelligence: The Very Idea. MIT Press, Cambridge, MA, 1985.

[91]A. L. G. Hayzelden and R. A. Bourne. Agent Technology For Communication Infrastructures. John Wiley & Sons, Ltd, London, UK, 2001.

[92] D. O. Hebb. The Organization of Behavior. Wiley, New York, 1949.

[93] C. Hewitt. Viewing control structures as patterns of passing messages. Artificial Intelligence, 8(3):323–364, 1977.

[94]C. Hewitt, P. Bishop, and R. Steiger. A universal modular actor formalism for artificial intelligence. In IJCAI, pages 235–245, 1973.

[95] G. E. Hinton and T. J. Sejnowski, editors. Unsupervised Learning: Foundations of Neural Computation. MIT Press, 1999.

[96] J. H. Holland. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence. MIT Press, Cambridge, 1975.

[97] J. H. Holland. Escaping brittleness: The possibility of general-purpose learning algorithms applied to parallel rule-based systems. In R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, editors, Machine Learning, pages 593–624. Morgan Kaufmann, Los Altos, 1986.

[98] J. H. Holland. Outline for a logical theory of adaptive systems. Journal of the ACM, 9(3):297–314, 1962.

[99] J.-H. Hong and S.-B. Cho. Evolution of emergent behaviors for shooting game characters in Robocode. In Proceedings of the 2004 IEEE Congress on Evolutionary Computation (CEC'04), volume 1, pages 634–638. Piscataway, NJ, 2004.

[100] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Science, 79:2554–2558, 1982.

[101] J. J. Hopfield. Neurons with graded responses have collective computational properties like those of two-state neurons. In Proceedings of the National Academy of Sciences (USA), volume 81, pages 3088–3092, 1984.

[102] N. Howden, R. Rönnquist, A. Hodgson, and A. Lucas. JACK Intelligent Agents – summary of an agent infrastructure. In Proceedings of the 5th International Conference on Autonomous Agents, 2001.

[103]F. Hsu. Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. Princeton University Press, Princeton, 2002.

[104]D. L. Hudson and M. E. Cohen. Use of intelligent agents in the diagnosis of cardiac disorders. In Computers in Cardiology, pages 633–636. Memphis, Sept. 23-25 2002.

[105]P. Jackson. Introduction to Expert Systems. Addison-Wesley, 3rd edition, 1999.

[106] L. C. Jain, S. C. Tan, and C. P. Lim. An introduction to computational intelligence paradigms. In Computational Intelligence Paradigms, volume 137 of Studies in Computational Intelligence, pages 1–23. Springer Berlin, Heidelberg, 2008.

[107] J. Jarvis, D. Jarvis, R. Rönnquist, and L. C. Jain. A flexible plan step execution model for BDI agents. Journal of Multiagent and Grid Systems, 4(4):359–370, 2008.

[108] J. Jarvis, R. Rönnquist, D. Jarvis, and L. C. Jain. Holonic Execution: A BDI Approach. Springer-Verlag, 2008.

[109]N. Jennings and M. Wooldridge. Software agents. IEE Review, The Institution of Engineering and Technology (IET), United Kingdom, 42(1):17–20, 1996.

[110]N. R. Jennings, P. Faratin, T. J. Norman, P. O’Brien, and B. Odgers. Autonomous agents for business process management. Int. Journal of Applied Artificial Intelligence, 14(2):145–189, 2000.

[111] N. R. Jennings, P. Faratin, T. J. Norman, P. O'Brien, B. Odgers, and J. L. Alty. Implementing a business process management system using ADEPT: A real-world case study. Int. Journal of Applied Artificial Intelligence, 14(5):421–465, 2000.

[112]H. Jeon, C. Petrie, and M. R. Cutkosky. JATLite: A java agent infrastructure with message routing. IEEE Internet Computing, 4(2):87–96, 2000.

[113] C.-W. Jeong, D.-H. Kim, and S.-C. Joo. Mobile collaboration framework for u-healthcare agent services and its application using PDAs. In N. T. Nguyen, A. Grzech, R. J. Howlett, and L. C. Jain, editors, Proceedings of Agent and Multi-Agent Systems: Technologies and Applications (KES-AMSTA '07), volume 4496 of Lecture Notes in Artificial Intelligence, pages 747–756. Springer Berlin, Heidelberg, 2007.

[114]M. T. Jones. AI Application Programming. Charles River Media, Inc. Hingham, Massachusetts, 2003.

[115] G. A. Kaminka, M. M. Veloso, S. Schaffer, C. Sollitto, R. Adobbati, A. N. Marshall, A. Scholer, and S. Tejada. Gamebots: A flexible test-bed for multi-agent team research. Communications of the ACM, 45(1):43–45, 2002.

[116] G. A. Kaminka, A. Yakir, D. Erusalimchik, and N. Cohen-Nov. Towards collaborative task and team maintenance. In Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS '07), pages 1–8, New York, NY, USA, 2007. ACM.

[117] E. Kang, Y. Im, and U. Kim. Remote control multiagent system for u-healthcare service. In N. T. Nguyen, A. Grzech, R. J. Howlett, and L. C. Jain, editors, Proceedings of Agent and Multi-Agent Systems: Technologies and Applications (KES-AMSTA), volume 4496 of Lecture Notes in Artificial Intelligence, pages 636–644. Springer Berlin, Heidelberg, 2007.

[118] M. Khazab, J. Tweedale, and L. C. Jain. Dynamic Applications Using Multi-Agent Systems. In H.-N. Teodorescu, J. Watada, and L. C. Jain, editors, Intelligent Systems and Technologies: Methods and Applications, volume 217 of Studies in Computational Intelligence, pages 67–79. Springer, New York, NY, 2009.

[119] M. Khazab, J. Tweedale, and L. C. Jain. Interoperable Intelligent Agents in a Dynamic Environment. In K. Nakamatsu, G. Phillips-Wren, L. C. Jain, and R. J. Howlett, editors, New Advances in Decision Technologies, volume 199 of Studies in Computational Intelligence, pages 183–191. Springer-Verlag Berlin, Heidelberg, Germany, 2009.

[120] M. Khazab, J. Tweedale, and L. C. Jain. Web-based Multi-Agent System Architecture in a Dynamic Environment. International Journal of Knowledge-Based and Intelligent Engineering Systems, 14:217–227, December 2010.

[121]S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.

[122] T. Kohonen. Self-organising formation of topologically correct feature maps. Biological Cybernetics, 43:59–69, 1982.

[123] J. R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, 1992.

[124] D. Kuokka and L. Harada. On using KQML for matchmaking. In Proceedings of the First International Conference on Multiagent Systems (ICMAS), pages 239–245. AAAI Press, Menlo Park, CA, 1995.

[125]Y. Labrou, T. Finin, and Y. Peng. Agent communication languages: The current landscape. IEEE Intelligent Systems and Their Applications, IEEE Computer Society, US, 14(2):45–52, 1999.

[126]J. E. Laird. An exploration into computer games and computer-generated forces. In Proceedings of the Eighth Conference on Computer Generated Forces and Behavior Representation. Orlando, FL, USA, May 2000.

[127] J. E. Laird. It knows what you're going to do: Adding anticipation to a Quakebot. In Proceedings of the Fifth International Conference on Autonomous Agents (AGENTS '01), pages 385–392. ACM, New York, NY, USA, 2001.

[128]J. E. Laird and J. C. Duchi. Creating human-like synthetic characters with multiple skill levels: A case study using the SOAR Quakebot. In Proceedings of the Fall Symposium Series: Simulating Human Agents (AAAI 2000), pages 54–58, November 2000.

[129]J. E. Laird, A. Newell, and P. S. Rosenbloom. SOAR: An architecture for general intelligence. Artificial Intelligence, 33:1–64, 1987.

[130] J. E. Laird and P. S. Rosenbloom. The evolution of the Soar cognitive architecture. In T. Mitchell, editor, Mind Matters, pages 1–50, 1994.

[131]D. Lambert and J. Scholz.  Ubiquitous command and control. Intelligent Decision Technologies, 1(3):157–173, 2007.

[132] G. Lanzola and H. Boley. Experience with a functional-logic multi-agent architecture for medical problem-solving, pages 17–37. IGI Publishing, Hershey, PA, USA, 2002.

[133] J. Leng, C. Fyfe, and L. Jain. Reinforcement learning of competitive skills with soccer agents. In B. Apolloni, R. Howlett, and L. Jain, editors, Knowledge-Based Intelligent Information and Engineering Systems, volume 4692 of Lecture Notes in Computer Science, pages 572–579. Springer Berlin / Heidelberg, 2007.

[134] J. Leng, C. Fyfe, and L. C. Jain. Simulation and reinforcement learning with soccer agents. Journal of Multiagent and Grid Systems, 4(4):415–436, 2008.

[135]J. Leng, J. Li, and L.C. Jain. A role-oriented BDI framework for real-time multiagent teaming. Intelligent Decision Technologies: An International Journal, 2(4):205–217, 2008.

[136] M. van Lent, J. Laird, J. Buckman, J. Hartford, S. Houchard, K. Steinkraus, and R. Tedrake. Intelligent agents in computer games. In Proceedings of The Sixteenth National Conference on Artificial Intelligence, pages 929–930. AAAI Press, Orlando, FL, USA, July 1999.

[137] P. Lichocki, K. Krawiec, and W. Jaśkowski. Evolving teams of cooperating agents for real-time strategy game. In M. Giacobini, A. Brabazon, S. Cagnoni, G. Di Caro, A. Ekárt, A. Esparcia-Alcázar, M. Farooq, A. Fink, and P. Machado, editors, Applications of Evolutionary Computing, volume 5484 of Lecture Notes in Computer Science, pages 333–342. Springer Berlin / Heidelberg, 2009.

[138] R. Linsker. Self-organization in a perceptual network. Computer, 21(3):105–117, 1988.

[139] R. P. Lippmann.  An introduction to computing with neural nets. IEEE ASSP Magazine, 4(2):4–22, April 1987.

[140] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. In Machine Learning, pages 285–318, 1988.

[141] M. Ljungberg and A. Lucas. The OASIS air traffic management system. In Proceedings of the 2nd Pacific Rim International Conference on Artificial Intelligence, volume 2, pages 1183–1189. Seoul, Republic of Korea, Sept, 1992.

[142] T. Loboda, P. Brusilovsky, and G. Grady. An agent for versatile intelligence analysis. Intelligent Decision Technologies: An International Journal, 5(1):17–30, 2011.

[143] S. M. Lucas and G. Kendall.  Evolutionary computation and games. IEEE Computational Intelligence Magazine, 1(1):10–18, 2006.

[144] S. M. Lucas and J. Togelius. Point-to-point car racing: An initial study of evolution versus temporal difference learning. In IEEE Symposium on Computational Intelligence and Games, 2007 (CIG '07), pages 260–267, April 2007.

[145]M. Luck, P. McBurney, and C. Preist. Agent Technology: Enabling next-generation computing: a roadmap for agent-based computing. Agentlink, 2003.

[146] C. Madeira, V. Corruble, and G. Ramalho. Designing a reinforcement learning-based adaptive AI for large-scale strategy games. In J. Laird and J. Schaeffer, editors, Proceedings of the 2nd Artificial Intelligence and Interactive Digital Entertainment Conference, pages 121–123. AAAI Press, Menlo Park, 2006.

[147] C. Madeira, V. Corruble, G. Ramalho, and B. Ratitch. Bootstrapping the learning process for the semi-automated design of a challenging game AI. In D. Fu, S. Henke, and J. Orkin, editors, Challenges in Game Artificial Intelligence: Papers from the 2004 AAAI Workshop, pages 72–76. AAAI Press, Menlo Park, 2004.

[148]P. Maes. Agents that reduce work and information overload. Communications of the ACM, 37(7):30–40, 1994.

[149]T. Martinetz and K. Schulten. A “neural-gas” network learns topologies. Artificial Neural Networks, I:397–402, 1991.

[150] R. M. A. Mateo, L. F. Cervantes, H.-K. Yang, and J. Lee. Mobile agent using data mining for diagnostic support in ubiquitous healthcare. In N. T. Nguyen, A. Grzech, R. J. Howlett, and L. C. Jain, editors, Proceedings of Agent and Multi-Agent Systems: Technologies and Applications (KES-AMSTA), volume 4496 of Lecture Notes in Artificial Intelligence, pages 795–804. Springer Berlin, Heidelberg, 2007.

[151]J. McCarthy. Programs with common sense. In Symposium on Mechanization of Thought Processes. National Physical Laboratory, Teddington, England, 1958.

[152] P. McCorduck. Machines Who Think. W. H. Freeman, San Francisco, CA, USA, 1979.

[153] W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115–133, 1943.

[154] D. Michie. Trial and error. Penguin Science Survey, 2:129–145, 1961.

[155] M. Midtgaard, L. Vinther, J. Christiansen, A. Christensen, and Y. Zeng. Time-based reward shaping in real-time strategy games. In L. Cao, A. Bazzan, V. Gorodetsky, P. Mitkas, G. Weiss, and P. Yu, editors, Agents and Data Mining Interaction, volume 5980 of Lecture Notes in Computer Science, pages 115–125. Springer Berlin / Heidelberg, 2010.

[156] R. Miikkulainen, B. D. Bryant, R. Cornelius, I. V. Karpov, K. O. Stanley, and C. H. Yong. Computational intelligence in games. In G. Y. Yen and D. B. Fogel, editors, Computational Intelligence: Principles and Practice. IEEE Computational Intelligence Society, Piscataway, NJ, 2006.

[157] M. L. Minsky and S. A. Papert. Perceptrons. Cambridge, MA: MIT Press, 1969.

[158] M. Minsky. The Society of Mind. Simon and Schuster, Pymble, Australia, 1985.

[159]A. Moreno and J. L. Nealon. Applications of Software Agent Technology in the Health Care Domain. Whitestein, 2003.

[160]D. E. Moriarty. Symbiotic Evolution of Neural Networks in Sequential Decision Tasks. PhD thesis, Department of Computer Sciences, The University of Texas at Austin, 1997. Technical Report UT-AI97-257.

[161] D. E. Moriarty and R. Miikkulainen. Efficient reinforcement learning through symbiotic evolution. Machine Learning, 22:11–32, 1996.

[162] M. Müller. Computer Go. Artificial Intelligence, 134(1-2):145–179, 2002.

[163]S. Ghoreishi Nejad, R. Martens, and R. Paranjape. An agent-based diabetic patient simulation. In N.T. Nguyen, G.S. Jo, R.J. Howlett, and L.C. Jain, editors, Proceedings of Agent and Multi-Agent Systems: Technologies and Applications (KES-AMSTA), volume 4953 of Lecture Notes in Artificial Intelligence, pages 832–841. Springer Berlin, Heidelberg, 2008.

[164]N. Nilsson. Artificial Intelligence: A New Synthesis. Morgan Kaufmann Publishers, 1998.

[165] M. Nodine and A. Unruh. Facilitating open communication in agent systems: The InfoSleuth infrastructure. In M. Singh, A. Rao, and M. Wooldridge, editors, Intelligent Agents IV: Agent Theories, Architectures, and Languages, volume 1365 of Lecture Notes in Computer Science, pages 281–295. Springer Berlin / Heidelberg, 1998.

[166] S. Nolfi, J. L. Elman, and D. Parisi. Learning and evolution in neural networks. Technical Report 9019, Center for Research in Language, University of California, San Diego, 1990.

[167] E. Norling. Capturing the Quake player: Using a BDI agent to model human behaviour. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS '03), pages 1080–1081. ACM, New York, NY, USA, 2003.

[168] H. S. Nwana. Software agents: An overview. The Knowledge Engineering Review, Cambridge University Press, 11(3), 1996.

[169] P. D. O'Brien and R. C. Nicol. FIPA: Towards a standard for software agents. BT Technology Journal, 16(3), 1998.

[170]P.-Y. Oudeyer and J.-L. Koning. Modeling soccer-robots strategies through conversation policies. In D. Kotz and F. Mattern, editors, Agent Systems, Mobile Agents, and Applications, volume 1882 of Lecture Notes in Computer Science, pages 583–623. Springer Berlin / Heidelberg, 2000.

[171] L. Panait and S. Luke. Cooperative multi-agent learning: The state of the art. Autonomous Agents and Multi-Agent Systems, 11(3):387–434, 2005.

[172] H. V. D. Parunak. Manufacturing experience with the contract net. In M. Huhns, editor, Distributed Artificial Intelligence, pages 285–310. Pitman Publishing: London and Morgan Kaufmann: San Mateo, CA, 1987.

[173] R. Patil, R. Fikes, P. Patel-Schneider, D. P. McKay, T. Finin, T. Gruber, and R. Neches. The DARPA knowledge-sharing effort: Progress report. In B. Nebel, editor, Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, August 1992.

[174] S. Petrakis and A. Tefas. Neural networks training for weapon selection in first-person shooter games. In K. Diamantaras, W. Duch, and L. Iliadis, editors, Artificial Neural Networks (ICANN'10), volume 6354 of Lecture Notes in Computer Science, pages 417–422. Springer Berlin / Heidelberg, 2010.

[175] M. Pfeiffer. Reinforcement learning of strategies for Settlers of Catan. In Q. Mehdi, N. Gough, S. Natkin, and D. Al-Dabass, editors, Proceedings of the 5th International Conference on Computer Games: Artificial Intelligence, Design and Education, pages 384–388. The University of Wolverhampton, 2004.

[176] W. Pitts and W. S. McCulloch. How we know universals: The perception of auditory and visual forms. Bulletin of Mathematical Biophysics, 9:127–147, 1947.

[177]J. B. Pollack, A. D. Blair, and M. Land. Coevolution of a backgammon player. In C. G. Langton and K. Shimohara, editors, Proceedings of the 5th International Workshop on Artificial Life: Synthesis and Simulation of Living Systems (ALIFE-96). Cambridge, MA: MIT Press, 1996.

[178] M. Ponsen, P. Spronck, and K. Tuyls. Hierarchical reinforcement learning with deictic representation in a computer game. In P.-Y. Schobbens, W. Vanhoof, and G. Schwanen, editors, Proceedings of the 18th Belgium-Netherlands Conference on Artificial Intelligence, pages 251–258. University of Namur, 2006.

[179] M. Ponsen and P. H. M. Spronck. Improving adaptive game AI with evolutionary learning. In Proceedings of the 5th International Conference on Computer Games: Artificial Intelligence, Design and Education (CGAIDE'04), pages 389–396. University of Wolverhampton, 2004.

[180]D. Poole, A. Mackworth, and  R. Goebel. Computational Intelligence: A Logical Approach. Oxford University Press, New York, 1998.

[181]A. S. Rao and M. P. Georgeff. An abstract architecture for rational agents. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning (KR ’92), pages 439–449, 1992.

[182] A. S. Rao and M. P. Georgeff. BDI agents: From theory to practice. In 1st International Conference on Multi-Agent Systems, pages 312–319. San Francisco, CA, 1995.

[183] I. Rechenberg. Evolutionary strategy. In J. M. Zurada, R. Marks II, and C. Robinson, editors, Computational Intelligence: Imitating Life, pages 147–159. IEEE Press, Los Alamitos, 1994.

[184] C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. In Proceedings of Computer Graphics Conference (SIGGRAPH ’87), volume 21(4), pages 25–34, 1987.

[185] N. Richards, D. Moriarty, P. McQuesten, and R. Miikkulainen. Evolving neural networks to play Go. In T. Bäck, editor, Proceedings of the Seventh International Conference on Genetic Algorithms (ICGA-97), pages 768–775, East Lansing, MI, 1997. San Francisco: Kaufmann.

[186]A. Rollings and E. Adams. Andrew Rollings and Ernest Adams on Game Design. New Riders Publishing, May 2003.

[187] F. Rosenblatt. The perceptron, a probabilistic model for information storage and organisation in the brain. Psychological Review, 65:386–408, 1958.

[188]T. J. Ross. Fuzzy Logic with Engineering Application. Wiley, Chichester, UK, 3rd edition, 2010.

[189] D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning internal representations by error propagation. In D.E. Rumelhart and J.L. McClelland, editors, Parallel distributed processing: Explorations in the Microstructure of Cognition, volume 1. MIT Press, Cambridge, MA, 1986.

[190] S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice-Hall, Pearson Education, Inc., Upper Saddle River, NJ, USA, 2nd edition, 2003.

[191]A. L. Samuel. Some studies in machine learning using the game of Checkers. IBM Journal of Research and Development, 3(3):210–229, 1959.

[192] A. L. Samuel. Some studies in machine learning using the game of checkers. II: Recent progress. IBM Journal of Research and Development, 11(6):601–617, 1967.

[193] J. Schaeffer. One Jump Ahead: Challenging Human Supremacy in Checkers. Springer-Verlag, Berlin, Germany, 1997.

[194] J. Schaeffer. The games computers (and people) play. In M. Zelkowitz, editor, Advances in Computers, volume 50, pages 189–266. Academic Press, San Diego, 2000.

[195] J. Schaeffer, N. Burch, Y. Björnsson, A. Kishimoto, M. Müller, R. Lake, P. Lu, and S. Sutphen. Checkers is solved. Science, 317(5844):1518–1522, July 2007.

[196]Z. Schiel, A. Bullock, and M. Rorie. Unofficial Guide to Unreal Tournament 2004. Stratos Group Inc, 2004.

[197] H.-P. Schwefel. Numerical Optimization of Computer Models. John Wiley, Chichester, UK, 1981.

[198] J.R. Searle. Speech Acts. Cambridge University Press, 1969.

[199]S. Seely and K. Sharkey. SOAP: Cross Platform Web Services Development Using XML. Prentice-Hall, Upper Saddle River, NJ, USA, 2001.

[200]F. Serce, F. Alpaslan, and L.C. Jain. Intelligent learning system for online learning. International Journal of Hybrid Intelligent Systems, IOS Press, 5(3):129–141, 2008.

[201]R. A. Serway and J. W. Jewett. Physics for Scientists and Engineers with Modern Physics. John Vondeling, 2000.

[202]W. Shen, Q. Hao, H. J. Yoon, and D. H. Norrie. Applications of agent-based systems in intelligent manufacturing: An updated review. Advanced Engineering Informatics, 20(4):415–431, 2006.

[203] A. Siddiqi and S. M. Lucas. A comparison of matrix rewriting versus direct encoding for evolving neural networks. In 1998 IEEE International Conference on Evolutionary Computation, pages 392–397. IEEE Press, 1998.

[204]B. G. Silverman, A. Normoyle, P. Kannan, R. Pater, D. Chandrasekaran, and G. Bharathy. An embeddable testbed for insurgent and terrorist agent theories. Intelligent Decision Technologies, 2(4):195–203, 2008.

[205]B.G. Silverman. Systems social science: A design inquiry approach for stabilization and reconstruction of social systems. Intelligent Decision Technologies: An International Journal, 4(1):51–74, 2010.

[206]C. Sioutis. Reasoning and Learning for Intelligent Agents. PhD thesis, Electrical and Information Technology, University of South Australia, Australia, 2006.

[207] I. A. Smith, P. R. Cohen, J. M. Bradshaw, M. A. Greaves, and H. A. Holmback. Designing conversation policies using joint intention theory. In P. R. Cohen, editor, Proceedings of the International Conference on Multi-Agent Systems, pages 269–276, 1998.

[208] K. O. Stanley. Efficient Evolution of Neural Networks through Complexification. PhD thesis, Artificial Intelligence Laboratory, The University of Texas at Austin, 2004.

[209] K. O. Stanley, B. D. Bryant, I. Karpov, and R. Miikkulainen. Real-time evolution of neural networks in the NERO video game. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-2006, Boston, MA), pages 1671–1674. Menlo Park, CA: AAAI Press, 2006.

[210]K. O. Stanley, B. D. Bryant, and R. Miikkulainen. Evolving neural network agents in the NERO video game. In Proceedings of the IEEE 2005 Symposium on Computational Intelligence and Games (CIG’05). Piscataway, NJ: IEEE, 2005.

[211] K. O. Stanley, B. D. Bryant, and R. Miikkulainen. Real-time neuroevolution in the NERO video game. IEEE Transactions on Evolutionary Computation, 9:653–668, 2005.

[212] K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Technical Report AI2001-290, Department of Computer Sciences, The University of Texas at Austin, 2002.

[213] P. Stone. Multiagent competitions and research: Lessons from RoboCup and TAC. In G. A. Kaminka, P. U. Lima, and R. Rojas, editors, RoboCup-2002: Robot Soccer World Cup VI, volume 2752 of Lecture Notes in Artificial Intelligence, pages 224–237. Springer Verlag, Berlin, 2003.

[214] R. S. Sutton and A.  G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.

[215] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.

[216]N. Taghezout and P. Zarate.  Supporting a multicriterion decision making multi-agent negotiation in manufacturing systems. Intelligent Decision Technologies: An International Journal, 3(3):139–155, 2009.

[217]I. Tanev, M. Joachimczak, and K. Shimohara. Evolution of driving agent, remotely operating a scale model of a car with obstacle avoidance capabilities. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, GECCO ’06, pages 1785–1792, New York, NY, USA, 2006. ACM.

[218] G. Tesauro. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6:215–219, 1994.

[219] G. Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58–68, 1995.

[220]S. Thatcher, L. Jain, and C. Fyfe. Theoretical framework for a knowledge-based intelligent enhancement to an aircraft ground proximity warning system. In Proc. 2nd International Conference on Artificial Intelligence in Science and Technology (AISAT ’04), pages 36–40, Hobart, Australia, 21-25 November 2004.

[221] S. J. Thatcher. Improving Aviation Safety with Intelligent Agent Technology: A Conceptual Framework for Embedding Intelligent Agents in the Airline Environment. PhD thesis, University of South Australia, 2009.

[222] S. Thatcher, L. Jain, and C. Fyfe. An intelligent aircraft landing support system. In Knowledge-Based Intelligent Information and Engineering Systems, volume 3213 of Lecture Notes in Artificial Intelligence, pages 74–79. Springer-Verlag, 2004.

[223]L. C. Thomas. Games, Theory and Applications. Dover Publications, Mineola N.Y., 2003.

[224]C. Thurau, C. Bauckhage, and G. Sagerer. Learning human-like movement behavior for computer games. In Proceedings of the 8th International Conference on the Simulation of Adaptive Behavior (SAB04). Los Angeles, CA, USA, July 2004.

[225] C. Thurau, C. Bauckhage, and G. Sagerer. Combining self-organizing maps and multilayer perceptrons to learn bot-behaviour for a commercial game. In Proceedings of GAME-ON, pages 119–124, 2003.

[226] J. Togelius and S. M. Lucas. Evolving controllers for simulated car racing. In The 2005 IEEE Congress on Evolutionary Computation, pages 1906–1913. IEEE Press, 2005.

[227] A. Turing. Intelligent machinery. In B. Meltzer and D. Michie, editors, Machine Intelligence, volume 5, pages 3–23. Edinburgh University Press, 1969. Originally a National Physical Laboratory report, 1948.

[228] A. Turing. Computing machinery and intelligence. Mind, 59(236):433–460, 1950.

[229] J. Tweedale, N. Ichalkaranje, C. Sioutis, P. Urlings, and L. Jain. Building a decision-making framework using agent teams. Intelligent Decision Technologies: An International Journal, 1(4):175–181, 2007.

[230]P. Urlings, J. Tweedale, C. Sioutis, and N. Ichalkaranje. Intelligent agents and situation awareness. In Knowledge-Based Intelligent Information and Engineering Systems, volume 2774 of Lecture Notes in Computer Science, pages 723–733. Springer Berlin / Heidelberg, 2003.

[231] S. Wallace and J. Laird. Toward a methodology for AI architecture evaluation: Comparing Soar and CLIPS. In N. Jennings and Y. Lespérance, editors, Intelligent Agents VI: Agent Theories, Architectures, and Languages, volume 1757 of Lecture Notes in Computer Science, pages 117–131. Springer Berlin / Heidelberg, 2000.

[232] C. J. C. H. Watkins and P. Dayan. Technical note: Q-learning. Machine Learning, 8(3):279–292, May 1992.

[233] S. Whiteson, P. Stone, K. O. Stanley, R. Miikkulainen, and N. Kohl. Automatic feature selection in neuroevolution. In Proceedings of the 2005 Genetic and Evolutionary Computation Conference (GECCO ’05), pages 1225–1232. ACM Press, Washington, D.C., 2005.

[234] D. Whitley. Genetic algorithms and neural networks. In J. Periaux, M. Galan, and P. Cuesta, editors, Genetic Algorithms in Engineering and Computer Science, pages 203–216. John Wiley and Sons, 1995.

[235] L. D. Whitley. The GENITOR algorithm and selection pressure: Why rank-based allocation of reproductive trials is best. In Proceedings of the 3rd International Conference on Genetic Algorithms, pages 116–123, San Francisco, CA, USA, 1989. Morgan Kaufmann Publishers Inc.

[236]G. Wickler and S. Potter. Information gathering: From sensor data to decision support in three simple steps. Intelligent Decision Technologies, 3(1):3–17, 2009.

[237] M. Wooldridge. Verifiable semantics for agent communication languages. In Proceedings of the Third International Conference on Multi-Agent Systems (ICMAS-98), pages 349–356, Paris, France, 1998.

[238] M. Wooldridge. Verifying that agents implement a communication language. In Proceedings of the Sixteenth National Conference on Artificial Intelligence and the Eleventh Innovative Applications of Artificial Intelligence Conference (AAAI '99/IAAI '99), pages 52–57, Orlando, Florida, United States, 1999. American Association for Artificial Intelligence.

[239]M. Wooldridge. An Introduction to MultiAgent Systems. John Wiley and Sons Ltd, Chichester, UK, 2002.

[240]M. Wooldridge and N. R. Jennings. Agent theories, architectures, and languages: A survey. In Proceedings of the Workshop on Agent Theories, Architectures, and Languages on Intelligent Agents (ECAI’94), pages 1–39, Springer-Verlag, New York, 1995.

[241] M. Wooldridge and N. R. Jennings. Intelligent agents: Theory and practice. Knowledge Engineering Review, 10(2):115–152, 1995.

[242] M. Wooldridge, J. P. Müller, and M. Tambe. Agent theories, architectures, and languages: A bibliography. In Intelligent Agents II: Agent Theories, Architectures, and Languages, pages 408–431. Springer Berlin / Heidelberg, 1996.

[243]M. J. Wooldridge. Reasoning about Rational Agents. The MIT Press, Cambridge, Massachusetts, London, England, 2000.

[244] X. Yao. Evolving artificial neural networks. Proceedings of the IEEE, 87(9):1423–1447, 1999.

[245] C. H. Yong, K. O. Stanley, R. Miikkulainen, and I. V. Karpov. Incorporating advice into evolution of neural networks. In J. Laird and J. Schaeffer, editors, Proceedings of the Second Artificial Intelligence and Interactive Digital Entertainment Conference, pages 98–104. AAAI Press, Menlo Park, 2006.

[246] T. Yoshioka, S. Ishii, and M. Ito. Strategy acquisition for the game Othello based on reinforcement learning. In S. Usui and T. Omori, editors, Proceedings of the Fifth International Conference on Neural Information Processing, pages 841–844. Tokyo: IOS Press, 1998.

[247]L. A. Zadeh. Fuzzy sets. Information and Control, 8(3):338–353, 1965.

[248]Y. Zaki and S. Pierre. Mobile agents in distributed meeting scheduling: A case study for distributed applications. Intelligent Decision Technologies: An International Journal, 1:71–82, 2007.
