SimplePie RSS feeds into a MySQL database server

Time: 2022-06-14 01:16:48

I am currently using SimplePie to pull in my RSS feeds as shown in the configuration below. I want to move the $urls to my database because my site loads far too slowly. I have each URL as a key/value pair with the site's name. I want to keep this association because I use the name, "abc" for instance, to pull the matching image out of my directory and style each feed with it, as you can see in the foreach loop below.


My question is, since I am not that clear on how arrays and tables work together, how would I rewrite this script to work with the database the same way?


I should also mention that I have already made a table in MySQL with the columns "id", "name", and "url". Any clarification will help.

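For illustration, the same url => name association could be rebuilt from such a table along the following lines. This is only a sketch: the PDO connection details and the table name `feeds` are placeholders, not part of the original setup.

<?php
// Sketch only: connection details and the table name `feeds` are placeholders.
$db = new PDO('mysql:host=localhost;dbname=mysite;charset=utf8', 'dbuser', 'dbpass');

// Rebuild the url => name mapping that the hard-coded array provides
$urls = array();
foreach ($db->query('SELECT url, name FROM feeds') as $row) {
  $urls[$row['url']] = $row['name'];
}

// The rest of the script can then stay unchanged
$feed = new SimplePie();
$feed->set_feed_url(array_keys($urls));
?>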

<?php
require_once('php/autoloader.php');
// Create a new instance of SimplePie
$feed = new SimplePie();
// Load the feeds: each feed URL maps to the site name used for images/styling
$urls = array(
  'http://abcfamily.go.com/service/feed?id=774372' => 'abc',
  'http://www.insideaolvideo.com/rss.xml' => 'aolvideo',
  'http://feeds.bbci.co.uk/news/world/rss.xml' => 'bbcwn',
  'http://www.bing.com' => 'bing',
  'http://www.bravotv.com' => 'bravo',
  'http://www.cartoonnetwork.com' => 'cartoonnetwork',
  'http://feeds.cbsnews.com/CBSNewsMain?format=xml' => 'cbsnews',
  'http://www.clicker.com/' => 'clicker',
  'http://feeds.feedburner.com/cnet/NnTv?tag=contentBody.1' => 'cnet',
  'http://www.comedycentral.com/' => 'comedycentral',
  'http://www.crackle.com/' => 'crackle',
  'http://www.cwtv.com/feed/episodes/xml' => 'cw',
  'http://disney.go.com/disneyxd/' => 'disneyxd',
  'http://www.engadget.com/rss.xml' => 'engadget',
  'http://syndication.eonline.com/syndication/feeds/rssfeeds/video/index.xml' => 'eonline',
  'http://sports.espn.go.com/espn/rss/news' => 'espn',
  'http://facebook.com' => 'facebook',
  'http://flickr.com/espn/rss/news' => 'flickr',
  'http://www.fxnetworks.com//home/tonight_rss.php' => 'fxnetworks',
  'http://www.hgtv.com/' => 'hgtv',
  'http://www.history.com/this-day-in-history/rss' => 'history',
  'http://rss.hulu.com/HuluRecentlyAddedVideos?format=xml' => 'hulu',
  'http://rss.imdb.com/daily/born/' => 'imdb',
  'http://www.metacafe.com/' => 'metacafe',
  'http://feeds.feedburner.com/Monkeyseecom-NewestVideos?format=xml' => 'monkeysee',
  'http://pheedo.msnbc.msn.com/id/18424824/device/rss/' => 'msnbc',
  'http://www.nationalgeographic.com/' => 'nationalgeographic',
  'http://dvd.netflix.com/NewReleasesRSS' => 'netflix',
  'http://feeds.nytimes.com/nyt/rss/HomePage' => 'newyorktimes',
  'http://www.nick.com/' => 'nickelodeon',
  'http://www.nickjr.com/' => 'nickjr',
  'http://www.pandora.com/' => 'pandora',
  'http://www.pbskids.com/' => 'pbskids',
  'http://www.photobucket.com/' => 'photobucket',
  'http://feeds.reuters.com/Reuters/worldNews' => 'reuters',
  'http://www.revision3.com/' => 'revision3',
  'http://www.tbs.com/' => 'tbs',
  'http://www.theverge.com/rss/index.xml' => 'theverge',
  'http://www.tntdrama.com/' => 'tnt',
  'http://www.tvland.com/' => 'tvland',
  'http://www.vimeo.com/' => 'vimeo',
  'http://www.vudu.com/' => 'vudu',
  'http://feeds.wired.com/wired/index?format=xml' => 'wired',
  'http://www.xfinitytv.com/' => 'xfinitytv',
  'http://www.youtube.com/topic/4qRk91tndwg/most-popular#feed' => 'youtube',
);
$feed->set_feed_url(array_keys($urls));
$feed->enable_cache(true);
$feed->set_cache_location('cache');
$feed->set_cache_duration(1800); // Set the cache time
$feed->set_item_limit(1);
$success = $feed->init(); // Initialize SimplePie
$feed->handle_content_type(); // Take care of the character encoding
?>
<?php require_once("inc/connection.php"); ?>
<?php require_once("inc/functions.php"); ?>
<?php include("inc/header.php"); ?>
<?php
// Group the fetched items by the feed they came from
$feed_items = array();
// All items from every feed, in one flat list
$items = $feed->get_items();
// Drop duplicate feed URLs before grouping
$urls = array_unique($urls);
foreach ($urls as $url => $image) {
  $unset = array();
  $feed_items[$url] = array();
  foreach ($items as $i => $item) {
    if ($item->get_feed()->feed_url == $url) {
      $feed_items[$url][] = $item;
      $unset[] = $i;
    }
  }
  foreach ($unset as $i) {
    unset($items[$i]);
  }
}
foreach ($feed_items as $feed_url => $items) {
  if (empty($items)) { ?>
  <div class="item element" data-symbol="<?php echo $urls[$feed_url] ?>" name="<?php echo $urls[$feed_url] ?>">
  <div class="minimise"><img src="images/boreds/<?php echo $urls[$feed_url] ?>.png"/>
  <div class="minimise2">
    <a href="<?php echo $feed_url; ?>"><h2>Visit <?php echo $urls[$feed_url] ?> now!</h2></a>
  </div></div>
  <div class="maximise">
    <a href="<?php echo $feed_url; ?>"><h2>Visit <?php echo $urls[$feed_url] ?> now!</h2></a>
  </div></div>

  <?php
    continue;
  }
  $first_item = $items[0];
  $feed = $first_item->get_feed();
  ?>

  <?php

$feedCount = 0;
foreach ($items as $item) {
  $feedCount++;
  ?>
<div class="item element" " data-symbol="<?php echo $urls[$feed_url] ?>" name="<?php echo $urls[$feed_url] ?>">
<div class="minimise"><strong id="amount"><?php echo ''.$feedCount; ?></strong>
  <img src="images/boreds/<?php echo $urls[$feed_url] ?>.png"/>
  <div class="minimise2"><a href="<?php echo $item->get_permalink(); ?>">
  <h2><?php echo $item->get_title(); ?></h2></a>
</div></div>
<div class="maximise"><a href="<?php echo $item->get_permalink(); ?>">
   <h2><?php echo $item->get_title(); ?></h2></a><br><p><?php echo $item->get_description(); ?></p>
</div></div>
<?php
  }
}
?>
<?php require("inc/footer2.php"); ?>

1 solution

#1



I use SimplePie in some projects, and this is what I do to reduce the page load time: I cache the results.


The flow is roughly like this:


  1. I put the PHP code that fetches the feeds and produces the HTML in one place. This script is basically equivalent to yours. Say I rename that script to feed-fetcher.php.

  2. In this script, I add a feature that saves the generated HTML into something like index.html in the same folder -- that is, in the root folder where I want to access the feed items' content for easy (or future) reading. (A minimal sketch follows this list.)

  3. I set up a cron job that runs the fetcher every 20 minutes or so.

  4. When I want to read the fetched content, I go to the URL of the cached content stored in the .html file mentioned above. My page load is then really fast, as I don't need to wait for the fetching and processing of every single feed I want to check.
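A minimal sketch of steps 2 and 3, with assumed file names and paths (feed-fetcher.php, index.html) rather than anything from the original answer:

<?php
// feed-fetcher.php -- capture the HTML the existing feed loop prints,
// then write it to a static file that visitors load directly.
ob_start();

// ... run the same SimplePie fetching and rendering code as in the question ...

$html = ob_get_clean();
file_put_contents(__DIR__ . '/index.html', $html);
?>

A matching crontab entry, run every 20 minutes, could look like:

*/20 * * * * php /path/to/feed-fetcher.php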

This solution does not require any database. But of course you can still use a database to store the fetched content, formatted as an HTML string, or in any other shape that suits your needs.

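If the database route is preferred for the cached output, a rough sketch could look like the following; the feed_cache table and the PDO connection are assumptions, not part of the answer:

<?php
// Assumed schema: CREATE TABLE feed_cache (id INT PRIMARY KEY, html MEDIUMTEXT, updated_at DATETIME);
$db = new PDO('mysql:host=localhost;dbname=mysite;charset=utf8', 'dbuser', 'dbpass');

// In feed-fetcher.php: overwrite the single cached row with the fresh HTML
$stmt = $db->prepare('REPLACE INTO feed_cache (id, html, updated_at) VALUES (1, ?, NOW())');
$stmt->execute(array($html));

// In the visitor-facing page: read the cache instead of fetching the feeds
echo $db->query('SELECT html FROM feed_cache WHERE id = 1')->fetchColumn();
?>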

Is this what you are looking for?

