The following are only suggested topics. You are not restricted to them. They are organised by research area to assist your search.
Each of the topics represents a significant recent success in its area. Some are small enough that most of the major papers could be covered in a few weeks. Some are too large for all the major papers to be read, but a smaller subset could be identified. The proposers' names are given in parentheses after the topic name. If you are stuck for initial references then these people might be able to give you some pointers.
if x_i has value v_i and x_j lies in the range [a,b] and .. then outcome = o_n

However, such a set of rules can be cumbersome and insufficiently concise for practical purposes; although the set of such rules does give a precise result for every member of the original data set, it can be a very large set.
Often, users might like something smaller but less precise: a set of fuzzy rules that doesn't classify everything in the original training set perfectly, or a set of rules that reaches a classification by a point-scoring process: add 3 for the presence of this feature, deduct 5 for the presence of that (compound) feature, and so on; if the final total is positive then accept, else reject. For example, this is how a US Patriot missile station decides which aircraft to target :-(, there being a requirement that the rule set is small and very fast to apply, so that the electronics can continuously monitor many hundreds of aircraft simultaneously. Yet other approaches use different rule formats, eg "if at least N of the following M conditions apply: [..,..,..,...], then the result is X".
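The two rule formats just described can be sketched in a few lines of code. All feature names, weights, and thresholds below are invented for illustration; this is not how any real deployed system scores its rules.

```python
def score_rule(record):
    """Point-scoring rule: add/deduct points per feature; accept if total > 0."""
    total = 0
    if record.get("has_transponder"):
        total += 3                    # add 3 for the presence of this feature
    if record.get("fast") and record.get("descending"):
        total -= 5                    # deduct 5 for that (compound) feature
    return "accept" if total > 0 else "reject"

def n_of_m_rule(record, conditions, n):
    """'At least N of the following M conditions apply' rule format."""
    return sum(1 for cond in conditions if cond(record)) >= n

# Hypothetical mortgage-style conditions for the N-of-M format.
conditions = [
    lambda r: r.get("income", 0) > 30000,
    lambda r: r.get("employed", False),
    lambda r: r.get("deposit", 0) >= 0.1,
]

applicant = {"income": 45000, "employed": True, "deposit": 0.05}
print(score_rule({"has_transponder": True}))   # → accept
print(n_of_m_rule(applicant, conditions, 2))   # → True (2 of 3 conditions hold)
```

The attraction of both formats is that each rule is cheap to evaluate, which matters when many records (or aircraft) must be scored continuously.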
Survey the state of the art. Pay attention to matters such as efficiency, accuracy of the final rule set, and scalability to huge data sets (eg millions of mortgage applications). Part of the problem for you will be discovering appropriate search keywords and appropriate conference proceedings to look at. You might like to begin by inspecting www.kdnuggets.com, an on-line newsletter about data mining that contains many links to commercial systems and vendors, although rather fewer to research activities.
..ACTGGCTTAA
       |||||
       GAATTCCGGT..

representing a two-step fragment that might, or might not, be part of the shortest tour. In general, single-step molecular fragments are so tiny and can be so abundant that a huge space of possible solutions can be explored, effectively in parallel - although, of course, not with perfect reliability.
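Adleman-style assembly can be caricatured in a few lines of code. The city names, sequence length, and edge set below are invented, and simple string bookkeeping stands in for real hybridisation chemistry; this is a sketch of the encoding idea, not a simulation of molecular dynamics.

```python
import random

random.seed(0)
BASES = "ACGT"
cities = ["A", "B", "C", "D"]
# Each city gets a random 8-letter code (a toy stand-in for an oligonucleotide).
code = {c: "".join(random.choice(BASES) for _ in range(8)) for c in cities}

def edge_strand(src, dst):
    """An edge strand carries the second half of its source's code followed by
    the first half of its destination's, so a splint strand complementary to a
    city's full code can ligate two consecutive edge strands."""
    return code[src][4:] + code[dst][:4]

def anneal(paths, edges):
    """One 'mixing' round: extend any path whose last city starts some edge."""
    return [p + [dst] for p in paths for (src, dst) in edges if p[-1] == src]

edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
paths = [["A"]]
for _ in range(3):                     # three rounds of ligation
    paths = anneal(paths, edges) or paths

print(edge_strand("A", "B"))           # half of A's code, then half of B's
print([p for p in paths if len(p) == 4])   # → [['A', 'B', 'C', 'D']]
```

In the real experiment all candidate paths form at once in solution, and the (error-prone) filtering steps that select paths of the right length and vertex set are done chemically rather than by a list comprehension.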
Since then, the field of DNA computing has attracted a lot of interest, adapting and generalising the idea to tasks such as large-scale propositional reasoning. Survey the state of the art; pay attention to questions of combinatorics, scalability, speed and reliability for solving large practical problems, as compared with more conventional computational techniques. You will find various websites devoted to DNA computing; the main ones [private opinion] may not turn up among the first few shown by your favourite search engine. There is no single journal for the topic yet. Do not confuse Leonard Adleman (USC) with Leonard Adelman (GMU) :-)