
How do I resolve the NoClassDefFoundError thrown when testing crawler4j's Controller class?

When I test crawler4j's Controller class it throws a NoClassDefFoundError. How can I fix this? I have added all the jars I thought were needed, but it still fails. Am I missing some jars? Any help would be appreciated!
Here is my source code:
package service_impl;

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class Controller {

public static void main(String[] args) throws Exception {
        String crawlStorageFolder = "/data/crawl/root";
        int numberOfCrawlers = 7;

        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder(crawlStorageFolder);

        /*
         * Instantiate the controller for this crawl.
         */
        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        /*
         * For each crawl, you need to add some seed urls. These are the first
         * URLs that are fetched and then the crawler starts following links
         * which are found in these pages
         */
        controller.addSeed("http://www.ics.uci.edu/~welling/");
        controller.addSeed("http://www.ics.uci.edu/~lopes/");
        controller.addSeed("http://www.ics.uci.edu/");

        /*
         * Initialize the crawler's static configuration before starting.
         * Without this call, crawlDomains stays null and shouldVisit throws
         * a NullPointerException once the classpath problem is resolved.
         */
        TaobaoImageCrawler.configure(new String[] { "http://www.ics.uci.edu/" },
                crawlStorageFolder + "/images");

        /*
         * Start the crawl. This is a blocking operation, meaning that your code
         * will reach the line after this only when crawling is finished.
         */
        controller.start(TaobaoImageCrawler.class, numberOfCrawlers);
    }
    
}



package service_impl;

import java.io.File;
import java.util.regex.Pattern;

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.BinaryParseData;
import edu.uci.ics.crawler4j.url.WebURL;
import edu.uci.ics.crawler4j.util.IO;

/**
 * @author Yasser Ganjisaffar <lastname at gmail dot com>
 */

/*
 * This class shows how you can crawl images on the web and store them in a
 * folder. This is just for demonstration purposes and doesn't scale for large
 * number of images. For crawling millions of images you would need to store
 * downloaded images in a hierarchy of folders
 */
public class TaobaoImageCrawler extends WebCrawler {

        private static final Pattern filters = Pattern.compile(".*(\\.(css|js|mid|mp2|mp3|mp4|wav|avi|mov|mpeg|ram|m4v|pdf"
                        + "|rm|smil|wmv|swf|wma|zip|rar|gz))$");

        private static final Pattern imgPatterns = Pattern.compile(".*(\\.(bmp|gif|jpe?g|png|tiff?))$");

        private static File storageFolder;
        private static String[] crawlDomains;

        public static void configure(String[] domain, String storageFolderName) {
                TaobaoImageCrawler.crawlDomains = domain;

                storageFolder = new File(storageFolderName);
                if (!storageFolder.exists()) {
                        storageFolder.mkdirs();
                }
        }

        @Override
        public boolean shouldVisit(WebURL url) {
                String href = url.getURL().toLowerCase();
                if (filters.matcher(href).matches()) {
                        return false;
                }

                if (imgPatterns.matcher(href).matches()) {
                        return true;
                }

                for (String domain : crawlDomains) {
                        if (href.startsWith(domain)) {
                                return true;
                        }
                }
                return false;
        }

        @Override
        public void visit(Page page) {
                String url = page.getWebURL().getURL();

                // We are only interested in processing images
                if (!(page.getParseData() instanceof BinaryParseData)) {
                        return;
                }

                if (!imgPatterns.matcher(url).matches()) {
                        return;
                }

                // Not interested in very small images
                if (page.getContentData().length < 10 * 1024) {
                        return;
                }

                // Derive a unique name for storing this image; using only the
                // extension would overwrite every previously stored image.
                String extension = url.substring(url.lastIndexOf("."));
                String hashedName = Integer.toHexString(url.hashCode()) + extension;
                // store image
                IO.writeBytesToFile(page.getContentData(), storageFolder.getAbsolutePath() + "/" + hashedName);

                System.out.println("Stored: " + url);
        }
}

I have tried many things and still can't solve this. Any help would be appreciated!
Tags: crawler4j, exception, NoClassDefFoundError
-------------------- Reply --------------------
Here is the exception that is thrown:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/http/util/Args
at org.apache.http.conn.scheme.Scheme.<init>(Scheme.java:90)
at edu.uci.ics.crawler4j.fetcher.PageFetcher.<init>(PageFetcher.java:93)
at service_impl.Controller.main(Controller.java:23)
Caused by: java.lang.ClassNotFoundException: org.apache.http.util.Args
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 3 more
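A quick way to diagnose this kind of NoClassDefFoundError is to ask the JVM directly whether the missing class is visible at runtime, and if so, which jar it is being loaded from. The helper below is a minimal sketch (the class and method names are my own, not part of crawler4j or httpclient):

```java
import java.net.URL;

public class ClassLocator {

    // Returns the URL of the .class resource that provides the named class,
    // or null when the class cannot be found on the classpath at all.
    static URL locate(String className) {
        try {
            Class<?> c = Class.forName(className);
            return c.getResource("/" + className.replace('.', '/') + ".class");
        } catch (ClassNotFoundException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // The class the stack trace above says is missing:
        System.out.println(locate("org.apache.http.util.Args"));
        // A class that is always present, for comparison:
        System.out.println(locate("java.lang.String"));
    }
}
```

If the first line prints null, no jar on the runtime classpath contains org.apache.http.util.Args; if it prints a jar URL, that URL tells you which httpcore jar (and hence which version) is actually being loaded.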
-------------------- Reply --------------------
When I deploy an application that uses httpclient on Apache Geronimo, I get this same error, but on other servers such as Tomcat it works fine. I have no idea what is going on, and it's driving me crazy.
-------------------- Reply --------------------
Is the httpcore-x.x.x.jar you are using older than version 4.3?
org.apache.http.util.Args was only added in 4.3.
Try switching to 4.3.
-------------------- Reply --------------------
Same problem here, also looking for a solution: http://bbs.csdn.net/topics/390660601?page=1#post-396260720
-------------------- Reply --------------------
Quoting reply #2 by broadLove:
When I deploy an application that uses httpclient on Apache Geronimo, I get this same error, but on other servers such as Tomcat it works fine. I have no idea what is going on, and it's driving me crazy.

Moderator, please help: http://bbs.csdn.net/topics/390660601?page=1#post-396260720
-------------------- Reply --------------------
Quoting reply #5 by broadLove:
Quote: reply #2 by broadLove:

When I deploy an application that uses httpclient on Apache Geronimo, I get this same error, but on other servers such as Tomcat it works fine. I have no idea what is going on, and it's driving me crazy.

Moderator, please help: http://bbs.csdn.net/topics/390660601?page=1#post-396260720
Your case is a duplicate httpcore jar problem (you imported two copies, one 4.3 and one 4.0.1). I have already replied in your thread.
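When the same class is packaged in two different jars (here httpcore 4.3 and 4.0.1), which copy wins depends on classpath order, which is why the symptom can differ between servers such as Geronimo and Tomcat. A small probe like the following (a sketch; the class name is mine) lists every classpath entry that contains a given class, so duplicates show up immediately:

```java
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class DuplicateClassFinder {

    // Lists every classpath location that provides the named class.
    // More than one entry means the class is packaged in several jars.
    static List<URL> findAll(String className) throws IOException {
        String resource = className.replace('.', '/') + ".class";
        return Collections.list(
                DuplicateClassFinder.class.getClassLoader().getResources(resource));
    }

    public static void main(String[] args) throws IOException {
        List<URL> hits = findAll("org.apache.http.util.Args");
        System.out.println(hits.size() + " copies of org.apache.http.util.Args found:");
        for (URL u : hits) {
            System.out.println("  " + u);
        }
    }
}
```

If this prints more than one location, remove the older jar so only one version of httpcore remains on the classpath.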