Android Application Development: Networking with Volley (Part 2)

Posted: 2022-08-17 14:41:35

Introduction

Android Application Development: Networking with Volley (Part 1) introduced the general usage of Volley in combination with the Cloudant service, covering two request types, StringRequest and JsonObjectRequest. Ordinary request tasks can probably all be handled with those two, but network programming comes in endless variations, and we still want full control over the request type, the request flow, and every step in between. This article looks at Volley from the source-code angle and analyzes how a network request runs through it, which you can also read as the life cycle of a network request inside Volley.
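As a quick refresher on the usage pattern from Part 1, here is a minimal sketch of creating a queue and issuing a StringRequest. The URL and log tag are placeholders, and it assumes a valid Context plus the usual com.android.volley and com.android.volley.toolbox imports:

    RequestQueue queue = Volley.newRequestQueue(context);

    StringRequest request = new StringRequest(Request.Method.GET,
            "https://example.com/api/data",            // placeholder URL
            new Response.Listener<String>() {
                @Override
                public void onResponse(String response) {
                    // Delivered on the main thread once the request succeeds.
                    Log.d("VolleyDemo", "received " + response.length() + " chars");
                }
            },
            new Response.ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    Log.w("VolleyDemo", "request failed", error);
                }
            });

    queue.add(request);

Everything below traces what happens to such a request once it enters the queue.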

The starting point: RequestQueue

Before Volley can be used there must be a request queue to carry the requests, so let's first analyze how that queue is created and how it operates.

In Volley.java:

    /**
     * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
     *
     * @param context A {@link Context} to use for creating the cache dir.
     * @param stack An {@link HttpStack} to use for the network, or null for default.
     * @return A started {@link RequestQueue} instance.
     */
    public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();

        return queue;
    }

    /**
     * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
     *
     * @param context A {@link Context} to use for creating the cache dir.
     * @return A started {@link RequestQueue} instance.
     */
    public static RequestQueue newRequestQueue(Context context) {
        return newRequestQueue(context, null);
    }

Normally we use the second overload, the one-parameter newRequestQueue(Context context), which leaves stack defaulting to null. You can see that the RequestQueue we get back is created through the RequestQueue constructor, its start() method is then called, and finally it is returned to us. Next, look at the RequestQueue constructors:

    /**
     * Creates the worker pool. Processing will not begin until {@link #start()} is called.
     *
     * @param cache A Cache to use for persisting responses to disk
     * @param network A Network interface for performing HTTP requests
     * @param threadPoolSize Number of network dispatcher threads to create
     * @param delivery A ResponseDelivery interface for posting responses and errors
     */
    public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

    /**
     * Creates the worker pool. Processing will not begin until {@link #start()} is called.
     *
     * @param cache A Cache to use for persisting responses to disk
     * @param network A Network interface for performing HTTP requests
     * @param threadPoolSize Number of network dispatcher threads to create
     */
    public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }

    /**
     * Creates the worker pool. Processing will not begin until {@link #start()} is called.
     *
     * @param cache A Cache to use for persisting responses to disk
     * @param network A Network interface for performing HTTP requests
     */
    public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }

RequestQueue has three constructors, and newRequestQueue(Context context) ends up calling the last one. It creates a worker pool that by default carries four network dispatcher threads.
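If a different pool size (or a different delivery) is ever needed, the richer constructors above can in principle be used directly instead of Volley.newRequestQueue(). A minimal sketch, assuming the same DiskBasedCache/BasicNetwork/HurlStack wiring that newRequestQueue performs; the cache directory name here is an assumption:

    // Hand-built queue with 8 network dispatcher threads instead of the default 4.
    File cacheDir = new File(context.getCacheDir(), "volley");  // assumed cache dir name
    Network network = new BasicNetwork(new HurlStack());
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 8);
    queue.start();  // nothing is processed until start() is called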

The latter two constructors both fall through to the first one, which only assigns some member fields, so there is nothing more to say about them. Next, the start() method:

    public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.

        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

It first performs a stop(), making all running dispatchers exit so that nothing is currently working. Its main job is then to start one CacheDispatcher plus as many NetworkDispatchers as the thread pool size dictates. Let's analyze CacheDispatcher first.

CacheDispatcher: cache handling

CacheDispatcher is the processor of the cache queue; it is told to start() the moment it is created. Since CacheDispatcher extends Thread, we need to look at the run() method it overrides:

    @Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();  // initialize the cache

        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();  // blocking take from the cache queue
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {  // if the request was already canceled, just skip it
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());  // look for cached data for this request
                if (entry == null) {
                    // Cache miss: this request has never been fulfilled, so hand it to the network queue.
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {  // the cached entry is expired, so fresh data is needed: hand it to the network queue
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                // There is cached, unexpired data, so it can be parsed. Parsing is delegated to the
                // request's parseNetworkResponse method, which a custom Request subclass may override.
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    // The entry is valid and needs no refresh: hand it to the Delivery, which eventually
                    // triggers onResponse/onErrorResponse of a request subclass such as StringRequest.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }

            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }

CacheDispatcher does a lot; we will digest it piece by piece later. For now, let's see where our request goes after it is add()ed. Look at the add method in RequestQueue.java:

    /**
     * Adds a Request to the dispatch queue.
     * @param request The request to service
     * @return The passed-in request
     */
    public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);  // add it to the current request set (a HashSet)
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        // A request that should not be cached needs a direct network request, so it is added
        // straight to the network queue.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();  // Volley uses the request URL as the cache key
            if (mWaitingRequests.containsKey(cacheKey)) {
                // A request for the same URL is already in flight, so this one is staged behind it.
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                // Requests sharing a URL are kept in a staged list, which is parked in the waiting map.
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight. A brand-new request maps to null; a staged list only appears once
                // more requests for the same URL arrive.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);  // put it on the cache queue, where CacheDispatcher will handle it
            }
            return request;
        }
    }

The mCacheQueue here is the same blocking queue that was handed to CacheDispatcher, so once add() has put the request into mCacheQueue, the CacheDispatcher, which is already running, will process the newly added request. At this point we can do a first round of tidying up:

1. After our request is added to the RequestQueue, it is first added to the queue instance's mCurrentRequests set for local bookkeeping;

2. If a request with the same URL already exists, the staging relationship is recorded in mWaitingRequests; if not, the staging entry is null but is likewise recorded in mWaitingRequests;

3. A request with no staging relationship (a new URL) is put straight into mCacheQueue for CacheDispatcher to process.
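Note that step 3 only applies to cacheable requests; as the add() code above shows, an uncacheable request skips the cache queue entirely. A minimal sketch of opting out, reusing the url/listener/errorListener/queue placeholders from the earlier example:

    StringRequest request = new StringRequest(url, listener, errorListener);
    request.setShouldCache(false);  // add() will now put it straight into mNetworkQueue
    queue.add(request);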

Looking at this, the handling of requests for the same URL is special. The first time some request A is made, A goes straight into the cache queue and is handled by CacheDispatcher. When a second request B for the same URL is made, if A is still present in mWaitingRequests then B is shelved: it is not put into mCacheQueue for processing, it just waits. Waits until when? It is not hard to guess that B has to wait for A to complete before it can proceed.

It all boils down to knowing how mWaitingRequests operates: when are the staged requests stored in it taken out and executed? Let's note that question down for now and go back to CacheDispatcher. Its handling of a request can be summarized as the following cases:

1. A canceled request is simply marked as finished and skipped (a sketch of how cancellation is triggered follows this list);

2. A request that has no response data yet, whose data has expired, or that is explicitly marked as needing a refresh is dropped into mNetworkQueue, which, like mCacheQueue, is a blocking queue;

3. A request that has response data which has not expired triggers Request's parseNetworkResponse method to parse the data; this method can be overridden (customized) by subclassing Request;

4. Every valid response (whether or not it needs refreshing) is posted through mDelivery, and a request that needs refreshing is additionally put back into mNetworkQueue.
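For context on case 1, cancellation originates on the caller's side. A hedged sketch using the tag helpers on Request and RequestQueue; the tag value is arbitrary:

    request.setTag("profile-screen");  // tag the request before adding it
    queue.add(request);

    // Later, e.g. in an Activity's onStop(), cancel everything carrying that tag.
    // The dispatchers then see isCanceled() == true and finish the request without delivering it.
    queue.cancelAll("profile-screen");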

Case 1 we will not analyze for now; we will run into it again later. Next, let's analyze how mNetworkQueue operates. mNetworkQueue is one of the parameters passed in when CacheDispatcher is constructed, and from RequestQueue's start() method it is not hard to see that the corresponding processor is NetworkDispatcher.

NetworkDispatcher: network handling

In RequestQueue's start() method, several NetworkDispatchers exist; their number equals the network-thread count passed to the RequestQueue constructor, four by default.

    public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.

        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

Every dispatcher is start()ed immediately after it is created, and NetworkDispatcher is also a Thread subclass, so next we need to analyze the run() method it overrides. Before that, take a look at its constructor:

    public NetworkDispatcher(BlockingQueue<Request<?>> queue,
            Network network, Cache cache,
            ResponseDelivery delivery) {
        mQueue = queue;
        mNetwork = network;
        mCache = cache;
        mDelivery = delivery;
    }

mQueue is mNetworkQueue, the very same one used in CacheDispatcher. mNetwork defaults to a BasicNetwork, mCache is the cache, and mDelivery is the final dispatcher of results, which we will analyze later.

Next, its overridden run() method:

    @Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);  // run at background priority
        Request<?> request;
        while (true) {
            try {
                // Take a request from the queue.
                // mQueue is mNetworkQueue, so requests forwarded by CacheDispatcher are picked up
                // here by the NetworkDispatcher. Note that take() blocks.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                // The quit path interrupts the dispatcher so it can exit this loop.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {  // already canceled: mark it finished and move on
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);  // BasicNetwork does the real work
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);  // parse the raw network response
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();  // mark the request as delivered and dispatch the response
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                // A VolleyError triggers the request's parseNetworkError and mDelivery's postError.
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                // Unknown errors only trigger mDelivery's postError.
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                mDelivery.postError(request, new VolleyError(e));
            }
        }
    }

mNetwork.performRequest is where the real network request is carried out; BasicNetwork is not analyzed here. The reply to a network request is of type NetworkResponse; let's see what that type looks like:

    /**
     * Data and headers returned from {@link Network#performRequest(Request)}.
     */
    public class NetworkResponse {
        /**
         * Creates a new network response.
         * @param statusCode the HTTP status code
         * @param data Response body
         * @param headers Headers returned with this response, or null for none
         * @param notModified True if the server returned a 304 and the data was already in cache
         */
        public NetworkResponse(int statusCode, byte[] data, Map<String, String> headers,
                boolean notModified) {
            this.statusCode = statusCode;
            this.data = data;
            this.headers = headers;
            this.notModified = notModified;
        }

        public NetworkResponse(byte[] data) {
            this(HttpStatus.SC_OK, data, Collections.<String, String>emptyMap(), false);
        }

        public NetworkResponse(byte[] data, Map<String, String> headers) {
            this(HttpStatus.SC_OK, data, headers, false);
        }

        /** The HTTP status code. */
        public final int statusCode;

        /** Raw data from this response. */
        public final byte[] data;

        /** Response headers. */
        public final Map<String, String> headers;

        /** True if the server returned a 304 (Not Modified). */
        public final boolean notModified;
    }

NetworkResponse stores the reply to a request: the data itself and the headers, plus the status code and other related information. Depending on the request type, the reply data is handled differently, for example a String reply versus a JSON reply. So, quite naturally, each request type has to process its own reply data, which is exactly what triggers the parseNetworkResponse method of the Request subclass. Below we take StringRequest as the example:

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        String parsed;
        try {
            parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
        } catch (UnsupportedEncodingException e) {
            parsed = new String(response.data);
        }
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

For the reply, StringRequest first tries to decode the data using the charset identified from the headers; if that fails, it simply decodes the data with the default charset.
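To make the charset handling concrete, here is a small hedged illustration of HttpHeaderParser.parseCharset as used above; the header value is made up, and java.util plus the toolbox imports are assumed:

    Map<String, String> headers = new HashMap<String, String>();
    headers.put("Content-Type", "text/plain; charset=UTF-8");
    String charset = HttpHeaderParser.parseCharset(headers);  // yields "UTF-8"
    // Without a charset parameter in Content-Type, parseCharset falls back to a default,
    // and if decoding with it throws, StringRequest falls back to new String(response.data).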

Either way it ends up calling Response.success, and the arguments also include the header information parsed by Volley's own HttpHeaderParser. We need to see what Response.success actually does; since the Response class contains very little code, here it is in full for analysis:

    public class Response<T> {

        /** Callback interface for handling parsed response data. */
        public interface Listener<T> {
            /** Called when a response is received. */
            public void onResponse(T response);
        }

        /** Callback interface for handling error responses. */
        public interface ErrorListener {
            /**
             * Callback invoked when an error occurs.
             */
            public void onErrorResponse(VolleyError error);
        }

        /** Returns a successful response containing the parsed result. */
        public static <T> Response<T> success(T result, Cache.Entry cacheEntry) {
            return new Response<T>(result, cacheEntry);
        }

        /**
         * Returns an error response, containing the error and possibly other information.
         */
        public static <T> Response<T> error(VolleyError error) {
            return new Response<T>(error);
        }

        /** The parsed response data, or null on error. */
        public final T result;

        /** Cache metadata for this response, or null on error. */
        public final Cache.Entry cacheEntry;

        /** Detailed error information. */
        public final VolleyError error;

        /** True if this response expects a second, follow-up response, i.e. a refresh is needed. */
        public boolean intermediate = false;

        /**
         * Returns true if the response succeeded with no error, false otherwise.
         */
        public boolean isSuccess() {
            return error == null;
        }

        private Response(T result, Cache.Entry cacheEntry) {
            this.result = result;
            this.cacheEntry = cacheEntry;
            this.error = null;
        }

        private Response(VolleyError error) {
            this.result = null;
            this.cacheEntry = null;
            this.error = error;
        }
    }

So that is the response class: very simple. Success or error is recorded directly, and the isSuccess() method exposes that state to callers.

If the response succeeded, the message is stored in result, and the cache entry obtained by parsing the headers is stored in cacheEntry.

Request is the base class; the representative subclasses shipped with Volley are StringRequest and JsonObjectRequest. If developers have larger customization needs, they have to subclass Request and override some of its important methods.
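As an illustration, here is a minimal sketch of such a subclass: a hypothetical CsvRequest (not part of Volley) that splits the response body into lines. It follows the same pattern as StringRequest, overriding parseNetworkResponse and deliverResponse:

    import java.io.UnsupportedEncodingException;
    import java.util.Arrays;
    import java.util.List;

    import com.android.volley.NetworkResponse;
    import com.android.volley.Request;
    import com.android.volley.Response;
    import com.android.volley.toolbox.HttpHeaderParser;

    /** Hypothetical example: parses a plain-text response into a list of lines. */
    public class CsvRequest extends Request<List<String>> {
        private final Response.Listener<List<String>> mListener;

        public CsvRequest(String url, Response.Listener<List<String>> listener,
                Response.ErrorListener errorListener) {
            super(Method.GET, url, errorListener);
            mListener = listener;
        }

        @Override
        protected Response<List<String>> parseNetworkResponse(NetworkResponse response) {
            String body;
            try {
                body = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
            } catch (UnsupportedEncodingException e) {
                body = new String(response.data);  // fall back to the default charset, as StringRequest does
            }
            List<String> rows = Arrays.asList(body.split("\n"));
            return Response.success(rows, HttpHeaderParser.parseCacheHeaders(response));
        }

        @Override
        protected void deliverResponse(List<String> response) {
            mListener.onResponse(response);  // hand the parsed rows to the caller's listener
        }
    }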

Meanwhile, mDelivery has made so many appearances: why does it always show up wherever requests are being handled? Below we analyze it together with Request, with Request again represented by StringRequest.

ExecutorDelivery (the response dispatcher) and Request

mDelivery is of type ResponseDelivery, which is in fact an interface:

    public interface ResponseDelivery {
        /**
         * Parses a response from the network or cache and delivers it.
         */
        public void postResponse(Request<?> request, Response<?> response);

        /**
         * Parses a response from the network or cache and delivers it. The provided
         * Runnable will be executed after delivery.
         */
        public void postResponse(Request<?> request, Response<?> response, Runnable runnable);

        /**
         * Posts an error for the given request.
         */
        public void postError(Request<?> request, VolleyError error);
    }

Of the three methods, two deliver network responses and the last one delivers a network error. Tracing back to the RequestQueue constructor, the default dispatcher is an ExecutorDelivery:

    public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }
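Since the four-argument constructor shown earlier accepts any ResponseDelivery, the delivery thread can in principle be changed; a hedged sketch that would post results to a background HandlerThread instead (cache and network are placeholders):

    HandlerThread deliveryThread = new HandlerThread("volley-delivery");
    deliveryThread.start();

    RequestQueue queue = new RequestQueue(cache, network, 4,
            new ExecutorDelivery(new Handler(deliveryThread.getLooper())));
    queue.start();  // listeners now run on deliveryThread, so they must not touch views directly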

As you can see, by default the dispatcher works on the main thread. The usual work the dispatcher does is:

    @Override
    public void postResponse(Request<?> request, Response<?> response) {  // deliver a response
        postResponse(request, response, null);
    }

    @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {  // deliver a response
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }

    @Override
    public void postError(Request<?> request, VolleyError error) {  // deliver an error response
        request.addMarker("post-error");
        Response<?> response = Response.error(error);
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, null));
    }

Here we notice something: the request.markDelivered() call in NetworkDispatcher is actually redundant, since postResponse already performs it. Whether the result is a normal response or an error, a ResponseDeliveryRunnable is executed:

    private class ResponseDeliveryRunnable implements Runnable {
        private final Request mRequest;
        private final Response mResponse;
        private final Runnable mRunnable;

        public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
            mRequest = request;
            mResponse = response;
            // A runnable is only supplied in cases like the one analyzed above, where a cached
            // response is valid but still needs to be refreshed from the network.
            mRunnable = runnable;
        }

        @SuppressWarnings("unchecked")
        @Override
        public void run() {
            // If this request has canceled, finish it and don't deliver.
            if (mRequest.isCanceled()) {  // canceled: finish it with a marker and stop here
                mRequest.finish("canceled-at-delivery");
                return;
            }

            // Deliver a normal response or error, depending.
            if (mResponse.isSuccess()) {  // success: deliver the result
                mRequest.deliverResponse(mResponse.result);
            } else {  // failure: deliver the error
                mRequest.deliverError(mResponse.error);
            }

            // If this is an intermediate response, add a marker, otherwise we're done
            // and the request can be finished.
            if (mResponse.intermediate) {
                mRequest.addMarker("intermediate-response");
            } else {
                mRequest.finish("done");
            }

            // If we have been provided a post-delivery runnable, run it.
            if (mRunnable != null) {  // run the extra runnable if one was supplied
                mRunnable.run();
            }
        }
    }

As the dispatcher and handler of network responses, the Delivery performs the last layer of checks on the response data. And when the Delivery asks whether a response succeeded, the Request has already processed the response (checked whether it is a success or an error), so the correct state can be queried.

If the response is found to be successful, the Request's deliverResponse method is triggered (taking StringRequest as the example):

    @Override
    protected void deliverResponse(String response) {
        mListener.onResponse(response);
    }

In fact this simply triggers the network-response listener defined by the user; mListener is assigned in StringRequest's constructor:

    public StringRequest(int method, String url, Listener<String> listener,
            ErrorListener errorListener) {
        super(method, url, errorListener);
        mListener = listener;
    }

    public StringRequest(String url, Listener<String> listener, ErrorListener errorListener) {
        this(Method.GET, url, listener, errorListener);
    }

When the network reply is found to be unsuccessful, the Request's deliverError method is triggered. StringRequest does not override this method, so we trace it up to its parent class Request:

    public void deliverError(VolleyError error) {
        if (mErrorListener != null) {
            mErrorListener.onErrorResponse(error);
        }
    }

Here mErrorListener is likewise the error listener the user defined when using Volley. StringRequest does not handle it itself; it is assigned via super by running Request's constructor:

    public Request(int method, String url, Response.ErrorListener listener) {
        mMethod = method;
        mUrl = url;
        mErrorListener = listener;
        setRetryPolicy(new DefaultRetryPolicy());

        mDefaultTrafficStatsTag = findDefaultTrafficStatsTag(url);
    }

When the request has been fully settled, the Delivery notifies the Request to perform its closing operation, finish:

    void finish(final String tag) {
        if (mRequestQueue != null) {  // if the request queue is valid, mark this request finished in it
            mRequestQueue.finish(this);
        }

        // Everything below is logging-related and is not analyzed here.
        if (MarkerLog.ENABLED) {
            final long threadId = Thread.currentThread().getId();
            if (Looper.myLooper() != Looper.getMainLooper()) {
                // If we finish marking off of the main thread, we need to
                // actually do it on the main thread to ensure correct ordering.
                Handler mainThread = new Handler(Looper.getMainLooper());
                mainThread.post(new Runnable() {
                    @Override
                    public void run() {
                        mEventLog.add(tag, threadId);
                        mEventLog.finish(this.toString());
                    }
                });
                return;
            }

            mEventLog.add(tag, threadId);
            mEventLog.finish(this.toString());
        } else {
            long requestTime = SystemClock.elapsedRealtime() - mRequestBirthTime;
            if (requestTime >= SLOW_REQUEST_THRESHOLD_MS) {
                VolleyLog.d("%d ms: %s", requestTime, this.toString());
            }
        }
    }

mRequestQueue is of type RequestQueue, which was analyzed at the very beginning. One related question was left open back then: when can the staged requests for the same URL kept in mWaitingRequests finally be triggered? Analyzing mRequestQueue's finish method below answers that question:

    void finish(Request<?> request) {
        // Remove from the set of requests currently being processed.
        synchronized (mCurrentRequests) {
            mCurrentRequests.remove(request);  // once a request completes, it is removed from mCurrentRequests
        }

        if (request.shouldCache()) {  // true by default, unless Request.setShouldCache was called
            synchronized (mWaitingRequests) {
                String cacheKey = request.getCacheKey();  // as noted earlier, the cache key is the URL
                // Remove this request's key from the waiting map, taking with it any requests staged behind it.
                Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
                if (waitingRequests != null) {
                    if (VolleyLog.DEBUG) {
                        VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                                waitingRequests.size(), cacheKey);
                    }
                    // Process all queued up requests. They won't be considered as in flight, but
                    // that's not a problem as the cache has been primed by 'request'.
                    // If requests really were staged, they are all added to mCacheQueue for
                    // CacheDispatcher to handle.
                    mCacheQueue.addAll(waitingRequests);
                }
            }
        }
    }

Good: the last pending question is now solved as well, and that is the whole journey of a Request through Volley.

Summary

1. When a RequestQueue has been successfully created, it starts one CacheDispatcher (cache dispatcher) and, by default, four NetworkDispatchers (network request dispatchers);

2. The CacheDispatcher acts as the first layer of buffering. Once started, it blocks while taking requests from the cache queue mCacheQueue:

a. A request that has already been canceled is marked as skipped and the request is finished;

b. A brand-new or expired request is dropped straight into mNetworkQueue for the N NetworkDispatchers to process;

c. A request that already has cached data (a network reply) that has not expired is handed to Request's parseNetworkResponse for parsing, which determines whether the reply is a success.

The request and reply are then handed to the Delivery dispatcher for processing; if the cache needs refreshing, the request is also put into mNetworkQueue;

3. After the user adds a Request to the RequestQueue:

a. A request that does not need caching (this requires an extra setting; caching is on by default) goes straight into mNetworkQueue for the N NetworkDispatchers to process;

b. A brand-new request that does need caching is added to mCacheQueue for CacheDispatcher to process;

c. A request that needs caching but whose URL already has a request in flight is parked in mWaitingRequests for the time being; once the earlier request completes, it is re-added to mCacheQueue;

4. The NetworkDispatcher is where network requests really happen; it hands the work to BasicNetwork and, likewise, passes the request and result to the Delivery dispatcher;

5. The Delivery dispatcher is effectively the last layer of request processing. Before the Delivery handles the request, the Request has already parsed the network reply, so at that point success or failure is already decided. The Delivery then acts according to the reply the request got:

a. If the reply succeeded, it triggers deliverResponse, which ultimately invokes the Listener the developer set on the Request;

b. If the reply failed, it triggers deliverError, which ultimately invokes the ErrorListener the developer set on the Request.

Once that is done, the life cycle of the Request is over. The Delivery calls the Request's finish operation, removing it from mRequestQueue; at the same time, if requests for the same URL exist in the waiting map, the remaining staged requests are all dropped into mCacheQueue for CacheDispatcher to process.

The life cycle of a Request:

1. It is added to mRequestQueue via add() and waits to be executed;

2. Once the request is executed, its own parseNetworkResponse processes the network reply and decides whether that reply is a success;

3. If it succeeded, it ultimately triggers the Listener the developer set on it; if it failed, it ultimately triggers the ErrorListener the developer set on it (see the error-handling sketch below).
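To tie the error path together, a short hedged sketch of an ErrorListener that inspects the failure; error.networkResponse carries the server's NetworkResponse when one was received, and the log tag is arbitrary:

    new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
            if (error.networkResponse != null) {
                // The server answered, but with an error status code.
                Log.w("VolleyDemo", "HTTP " + error.networkResponse.statusCode);
            } else {
                // No response at all: timeout, no connection, and so on.
                Log.w("VolleyDemo", "network error", error);
            }
        }
    };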

With that, the ins and outs of a network request in Volley are clear. If for some reason we need to subclass Request to define our own request type, the thing that deserves the most attention is overriding parseNetworkResponse; this method plays a decisive role in what happens to the request once its reply comes back.