Feedback on the paper by Merabti and Llewellyn-Jones.

HYPOTHESES

The paper's hypotheses are left implicit, but are most clearly articulated at the beginning of the section on the experiments on p39:

* "... this [our distributed trust method] would be an effective, scalable way to enforce DRM in ubiquitous computing."
* "... we could use such a system in a resource-limited environment."

In general, where hypotheses are not explicitly stated, one might reasonably conclude, as is the case here, that any claims said to be confirmed by the experimental results are intended to be the hypotheses.

METHODOLOGY

The methodology is experimental, backed up by some reasoned argument as to why the proposed method might be expected to work, both technically and sociologically. The experiments simulate a real network with an artificially generated one of 10,000 nodes. The number of arcs is not stated, but there seems to be an implicit assumption that each node is connected to only a few other nodes (its neighbours).

The limitations of this experimental set-up are implicitly acknowledged early in the paper: "The techniques we're proposing are novel and still under development, so it would be grossly premature to suggest we can provide a solution to all the difficulties involved."

With the above caveat, the results of the experiment strongly support both hypotheses. There is no statistical analysis of the significance of the results, but the effects are so strong that one might conclude that such an analysis would be superfluous.

FLAWS

The following criticisms seem to me to be justified.

1. As acknowledged in the paper, any final conclusion would be premature. For instance, the experimental set-up was simulated, and one would want to see the same results in a real-world situation. Note, however, the serious logistic difficulties of conducting a real-world experiment; it is not a serious proposition in a small academic project.

2. More seriously, only one experimental set-up seems to have been tested. It would not have been difficult to regenerate the network a few times and repeat the experiments to see how robust the results were.

3. The experimental comparator, what they call the 'standard technique', is a bit of a straw man, i.e., it is surely not the best rival technique to compare their system with. It is almost unnecessary to do the experiment to see that it was bound to succeed, so their strong results may well be an artefact of this choice of comparator.

4. As well as giving the percentage of illegitimate content that was successfully blocked, the results should also have given the percentage of legitimate content that was wrongly blocked. Otherwise, blocking all content gives a perfect result.

5. There is no related-work section, so it is hard to assess the originality of the work. One hint that it is not wholly original is given on p41: "Its real strength is that it relies on the collaborative sharing of data between devices, a technique that has been successfully applied to other areas such as spam filtering." One would like to hear more about these applications of similar techniques to spam filtering. One would also like to know whether their 'standard technique' is really very standard, or whether there are better techniques in common use.

6. The sociological issues are not well addressed. For instance, there is no discussion of how to combat sizable communities forming that conspire to share illegitimate content and collectively ignore the trust rankings.

7. On the whole the paper is well presented. My main criticism of its presentation is that equations (1), (2) and (3) are presented, on p36-37, without any attempt at explanation or context. In contrast, the much simpler (4), (5) and (6) are explained well and in the context of their use. The references to cellular automata are rather superfluous; all that is needed (and used) is a network simulation.
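Flaws 2 and 4 can be made concrete with a small evaluation sketch. Everything in it is hypothetical: the 30% illegitimate-content rate, the filter's 90%/5% block probabilities, and the five seeds are illustrative stand-ins, not figures from the paper. The sketch reports both metrics demanded by flaw 4 (illegitimate content blocked, and legitimate content wrongly blocked), averages them over several independently regenerated workloads as flaw 2 suggests, and shows why a degenerate "block everything" policy looks perfect on the first metric alone.

```python
import random

def evaluate_blocking(block_everything=False, seed=0, n_items=1000):
    """Toy evaluation of a content filter on one randomly generated
    workload. All probabilities are illustrative, not from the paper."""
    rng = random.Random(seed)
    results = []  # list of (is_illegitimate, was_blocked) pairs
    for _ in range(n_items):
        illegitimate = rng.random() < 0.3  # assume 30% of content is bad
        if block_everything:
            blocked = True  # the degenerate policy from flaw 4
        else:
            # A hypothetical imperfect filter: blocks most bad content,
            # but also blocks some legitimate content by mistake.
            blocked = rng.random() < (0.9 if illegitimate else 0.05)
        results.append((illegitimate, blocked))

    bad = [b for ill, b in results if ill]
    good = [b for ill, b in results if not ill]
    true_block_rate = sum(bad) / len(bad)      # illegitimate content blocked
    false_block_rate = sum(good) / len(good)   # legitimate content wrongly blocked
    return true_block_rate, false_block_rate

# Flaw 2: repeat over several independently regenerated workloads (seeds)
# and report the spread, instead of a single run.
runs = [evaluate_blocking(seed=s) for s in range(5)]
mean_true = sum(r[0] for r in runs) / len(runs)
mean_false = sum(r[1] for r in runs) / len(runs)

# Flaw 4: "block everything" scores 100% on the first metric,
# but it also blocks 100% of legitimate content.
t, f = evaluate_blocking(block_everything=True)
assert t == 1.0 and f == 1.0
```

Reporting only `mean_true` would make the degenerate policy look ideal; it is the pair of numbers together that makes the comparison meaningful.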
YOUR REVIEWS

Generally, good reviews. Here are some of my most common criticisms.

* The main contribution of the paper was to describe a new technique for DRM. This was often omitted from the list of "kinds of contribution".
* Since the hypotheses were not explicitly stated, but had to be inferred, it was inevitable that some of you would misidentify them. In particular, several people mistook the early, scene-setting remarks, such as that ubiquitous computing is growing and that existing DRM techniques will become inadequate, for the main hypotheses of the paper. This sometimes led to unfair criticisms.
* Many people spotted a few of the flaws 1-7 above, but no one spotted most or all of them. However, I did not expect 100% agreement with my analysis and did not penalise its absence.
* Some people identified the experiments as exploratory. Note, however, that the hypotheses were effectively identified from the outset and the experiments were designed to confirm them -- not discover them.