Spark BigData Program: Real-Time Streaming Log Processing

1. Project Contents

  • Write a Python script that continuously generates user-behavior logs for an online learning site.
  • Start Flume to collect the generated logs.
  • Start Kafka to receive the logs collected by Flume.
  • Use Spark Streaming to consume the user logs from Kafka.
  • In Spark Streaming, clean the data to filter out invalid records, then analyze which courses users access and count the search volume for each course.
  • Write the results produced by Spark Streaming to a MySQL database.
  • Use Django as the front-end data display platform.
  • Use Ajax to transfer data asynchronously to the HTML page and render it with the ECharts framework.
  • This project uses IDEA 2019 as the IDE, with JDK 1.8, Scala 2.11, and Python 3.7.

2. Requirements Analysis

This hands-on project combines real-time stream processing with offline batch processing of big data.

Characteristics of real-time stream processing:

  • Massive amounts of data are produced continuously
  • The massive real-time data must be processed in real time
  • The processed results are written to the database in real time

Characteristics of offline (batch) processing:

  • Huge data volumes that are retained for a long time
  • Complex batch computations must run over large amounts of data
  • The data does not change before or during processing

3. Project Architecture

To meet the above requirements, the project adopts a Flume + Kafka + Spark + MySQL + Django architecture.

[Architecture diagram]

4. Data Source (DataSource)

Python data source

  • Default log directory: **/usr/app/BigData/StreamingComputer/log**
  • Default generation rate: 200 lines/second
  • Default log error rate: 8% (lines whose referer is "Na")
  • Files are named with the current timestamp by default
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import random
import time
import sys

url_paths = [
	"class/112.html",
	"class/128.html",
	"class/145.html",
	"class/146.html",
	"class/500.html",
	"class/250.html",
	"class/131.html",
	"class/130.html",
	"class/271.html",
	"class/127.html",
	"learn/821",
	"learn/823",
	"learn/987",
	"learn/500",
	"course/list"
]
ip_slices = [
	132, 156, 124, 10, 29, 167, 143, 187, 30, 46,
	55, 63, 72, 87, 98, 168, 192, 134, 111, 54, 64, 110, 43
]
http_refer = [
	"http://www.baidu.com/s?wd={query}",
	"https://www.sogou.com/web?query={query}",
	"http://cn.bing.com/search?q={query}",
	"https://search.yahoo.com/search?p={query}",
]
search_keyword = [
	"SparkSQL",
	"Hadoop",
	"Storm",
	"Flume",
	"Python",
	"MySql",
	"Linux",
	"HTML",
]
status_codes = [
	"200", "404", "500", "403"
]


# Generate a random IP address
def get_ip():
	return '{}.{}.{}.{}'.format(
		random.choice(ip_slices),
		random.choice(ip_slices),
		random.choice(ip_slices),
		random.choice(ip_slices)
	)


# Generate a random request line
def get_url():
	return '"/GET {}"'.format(random.choice(url_paths))


# Generate a random referer ("Na" in roughly 8% of cases)
def get_refer():
	if random.uniform(0, 1) > 0.92:
		return "Na"
	return random.choice(http_refer).replace('{query}', random.choice(search_keyword))


# Pick a random HTTP status code
def get_code():
	return random.choice(status_codes)


# Get the current time as a formatted string
def get_time():
	return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))


# Assemble one log line
def get_log_data():
	return '{} {} {} {} {}\n'.format(
		get_ip(),
		get_time(),
		get_url(),
		get_code(),
		get_refer()
	)


# Write n log lines, rotating to a new file roughly every 8000 lines
def save(n):
	count = 0
	print(get_time()[0: 10]+'\tDataSource Server has been prepared..')
	fp = open(
		'/usr/app/BigData/StreamingComputer/log/{}.log'.format(int(time.time())),
		'w+',
		encoding='utf-8'
	)
	for i in range(n):
		fp.write(get_log_data())
		count += 1
		time.sleep(0.005)
		if count > 8000:
			count = 0
			fp.close()
			fp = open(
				'/usr/app/BigData/StreamingComputer/log/{}.log'.format(int(time.time())),
				'w+',
				encoding='utf-8'
			)
	fp.close()


if __name__ == '__main__':
	save(int(sys.argv[1]))
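
The total number of lines to generate is passed as the script's only command-line argument; with the default 5 ms sleep this works out to roughly 200 lines per second, so an argument of 720000, for example, keeps the generator running for about an hour.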

Scala data source

  • Default log directory: **/usr/app/BigData/StreamingComputer/log**
  • Files are named with the current timestamp by default
  • Log error rate: 0%
  • The log directory, generation rate, and total line count are passed as command-line arguments (output directory, lines per second, total lines)
package common

import java.io.FileOutputStream
import java.text.SimpleDateFormat
import java.util.Date

import scala.util.Random

object DataProducer {
  val random = new Random()
  val urlPath: Array[String] = Array(
    "class/112.html",
    "class/128.html",
    "class/145.html",
    "class/146.html",
    "class/500.html",
    "class/250.html",
    "class/131.html",
    "class/130.html",
    "class/271.html",
    "class/127.html",
    "learn/821",
    "learn/823",
    "learn/987",
    "learn/500",
    "course/list")
  val ipSlice: Array[Int] = Array(
    132,156,124,10,29,167,143,187,30,46,55,63,72,87,98,168,192,134,111,54,64,110,43
  )
  val httpRefers: Array[String] = Array(
    "http://www.baidu.com/s?wd={query}",
    "https://www.sogou.com/web?query={query}",
    "http://cn.bing.com/search?q={query}",
    "https://search.yahoo.com/search?p={query}",
  )
  val keyWords: Array[String] = Array(
    "Spark SQL实战",
    "Hadoop生态开发",
    "Storm实战",
    "Spark Streaming实战",
    "python从入门到入狱",
    "Shell从入门到如图",
    "Linux从入门到放弃",
    "Vue.js"
  )
  val stateCode: Array[String] = Array(
    "200",
    "404",
    "500",
    "403"
  )
  var count = 0
  def main(args: Array[String]): Unit = {
    if(args.length !=3) throw new Exception("arguments error: arg must be 3") else run(args)
  }
  def getIp: String = s"${ipSlice(random.nextInt(ipSlice.length))}.${ipSlice(random.nextInt(ipSlice.length))}.${ipSlice(random.nextInt(ipSlice.length))}.${ipSlice(random.nextInt(ipSlice.length))}"
  // Use yyyy (calendar year); the format must match what the streaming job parses downstream
  def getTime: String = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date())
  def getRequestRow: String ="\""+s"/GET ${urlPath(random.nextInt(urlPath.length))}"+"\""
  def getRequestUrl:String = s"${httpRefers(random.nextInt(httpRefers.length)).replace("{query}",keyWords(random.nextInt(keyWords.length)))}"
  def getStateCode:String = s"${stateCode(random.nextInt(stateCode.length))}"
  def getLogData:String = s"$getIp $getTime $getRequestRow $getStateCode $getRequestUrl" + "\n"
  def run(args: Array[String]): Unit ={
    println(s"${new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date())} DataSource Server has been prepared")
    var out = new FileOutputStream(args(0)+"/"+ new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date())+".log")
    for(i <- 1 to args(2).toInt){
      out.write(getLogData.getBytes)
      out.flush()
      count += 1
      Thread.sleep(1000/args(1).toInt)
      if(count == 3000){
        count = 0  // reset the counter so the file keeps rotating every 3000 lines
        out.close()
        out = new FileOutputStream(args(0)+"/"+ new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date())+".log")
      }
    }
    out.close()
  }
}

Sample data

64.87.98.30 2021-05-28 00:19:58 "/GET course/list" 200 https://search.yahoo.com/search?p=Linux精通
46.132.30.124 2021-05-28 00:19:58 "/GET class/271.html" 500 https://search.yahoo.com/search?p=SparkSQL实战
10.143.143.30 2021-05-28 00:19:58 "/GET class/500.html" 500 https://search.yahoo.com/search?p=SparkSQL实战
168.110.143.132 2021-05-28 00:19:58 "/GET learn/500" 200 https://search.yahoo.com/search?p=HTML前端三剑客
54.98.29.10 2021-05-28 00:19:58 "/GET learn/500" 500 Na
63.168.132.124 2021-05-28 00:19:58 "/GET course/list" 403 https://search.yahoo.com/search?p=HTML前端三剑客
72.98.98.167 2021-05-28 00:19:58 "/GET class/112.html" 404 https://search.yahoo.com/search?p=Python爬虫进阶
29.87.46.54 2021-05-28 00:19:58 "/GET class/146.html" 403 http://cn.bing.com/search?q=Linux精通
43.43.110.63 2021-05-28 00:19:58 "/GET learn/987" 500 http://cn.bing.com/search?q=Linux精通
54.111.98.43 2021-05-28 00:19:58 "/GET course/list" 403 Na
187.29.10.10 2021-05-28 00:19:58 "/GET learn/823" 200 http://cn.bing.com/search?q=SparkSQL实战
10.187.29.168 2021-05-28 00:19:58 "/GET class/146.html" 500 http://www.baidu.com/s?wd=Storm实战
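
To make the log format concrete, the following is a minimal Python sketch (not part of the project code) that parses one sample line into the (course keyword, timestamp) pair which the Structured Streaming job in Section 7 extracts; the field positions mirror the line.split(" ") indices used there.

from datetime import datetime


def parse_line(line):
    # Fields after splitting on spaces: ip, date, time, '"/GET', 'path"', status, referer
    parts = line.strip().split(" ")
    referer = parts[6]
    if referer == "Na":  # lines without a referer are dropped by the streaming job
        return None
    course = referer.split("=")[1]  # search keyword after '?wd=' / '?query=' / '?q=' / '?p='
    event_time = datetime.strptime(parts[1] + " " + parts[2], "%Y-%m-%d %H:%M:%S")
    return course, event_time


sample = '64.87.98.30 2021-05-28 00:19:58 "/GET course/list" 200 https://search.yahoo.com/search?p=Linux精通'
print(parse_line(sample))  # ('Linux精通', datetime.datetime(2021, 5, 28, 0, 19, 58))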

5. Log Collection (Flume)

  • Collection is distributed: zoo1 and zoo2 tail the data source files, and zoo3 aggregates the data and forwards it to Kafka
  • **Flume documentation (Chinese translation):** https://flume.liyifeng.org/

zoo1 / zoo2

a1.sources=r1
a1.channels=c1
a1.sinks=k1

a1.sources.r1.type = TAILDIR
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /usr/app/BigData/StreamingComputer/log/.*log.*

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sinks.k1.type = avro
a1.sinks.k1.hostname = zoo3
a1.sinks.k1.port = 12345

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

zoo3

a1.sources=r1
a1.channels=c1
a1.sinks=k1

a1.sources.r1.type = avro
a1.sources.r1.bind = zoo3
a1.sources.r1.port = 12345

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = log
a1.sinks.k1.kafka.bootstrap.servers = zoo1:9092,zoo2:9092,zoo3:9092
a1.sinks.k1.kafka.flumeBatchSize = 100
a1.sinks.k1.kafka.producer.acks = -1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Startup command

bin/flume-ng agent --conf conf --conf-file conf/streaming_avro.conf --name a1 -Dflume.root.logger=INFO,console

6. Message Queue (Kafka)

  • **Kafka documentation (Chinese translation):** https://kafka.apachecn.org/

  • The Kafka cluster runs on ZooKeeper; installing and configuring ZooKeeper is not covered here

server.properties configuration

# Each broker needs a unique broker.id; set it to 2 and 3 on zoo2 and zoo3 respectively
broker.id=1
host.name=zoo1
listeners=PLAINTEXT://zoo1:9092
zookeeper.connect=zoo1:2181,zoo2:2181,zoo3:2181
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/app/kafka_2.13-2.8.0/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0

Kafka startup command

bin/kafka-server-start.sh -daemon config/server.properties

Kafka cluster startup script

#!/bin/bash
for i in {1..3}
do
ssh zoo$i ". /etc/profile;echo '------------node$i--------';/usr/app/kafka_2.13-2.8.0/bin/kafka-server-start.sh -daemon /usr/app/kafka_2.13-2.8.0/config/server.properties"
done

Create the Kafka topic

 bin/kafka-topics.sh --create --zookeeper zoo1:2181 --replication-factor 1 --partitions 1 --topic log
 
 # List topics
 bin/kafka-topics.sh --list --zookeeper zoo1:2181

Consumer model (run from IDEA)

Maven dependency

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.0.0</version>
</dependency>

Scala consumer (for testing)

package kafka

import java.util
import java.util.Properties

import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord, ConsumerRecords, KafkaConsumer}
import org.apache.kafka.common.serialization.StringDeserializer

object Consumer {
  val bootstrapServer = "zoo1:9092,zoo2:9092,zoo3:9092"
  val topic = "log"
  def main(args: Array[String]): Unit = {
    val pop = new Properties()
    // List of host/port pairs used to establish the initial connection to the Kafka cluster
    pop.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer)
    // Deserializer class for the record key
    pop.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
    // Deserializer class for the record value
    pop.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
    // Unique id of the consumer group; required when using the group management features for subscription and offset management
    pop.put(ConsumerConfig.GROUP_ID_CONFIG, "001")

    /**
     * Minimum amount of data the broker should return for a fetch request; if less data is
     * available, the request waits for more to accumulate. The default of 1 byte means the
     * fetch is answered as soon as a single byte is available or the request times out.
     * Larger values make the broker wait longer, trading extra latency for higher throughput.
     */
    pop.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG,1)
    val consumer = new KafkaConsumer[String, String](pop)
    // Subscribe to the topic
    val list = new util.ArrayList[String]()
    list.add(topic)
    consumer.subscribe(list)
    // Poll and print records
    while (true) {
      val value: ConsumerRecords[String, String] = consumer.poll(100)
      val value1: util.Iterator[ConsumerRecord[String, String]] = value.iterator()
      while (value1.hasNext) {
        val value2: ConsumerRecord[String, String] = value1.next()
        println(value2.key() + "," + value2.value())
      }
    }
  }
}
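
As a lighter-weight smoke test of the topic, a few lines of Python can replace the JVM consumer above. This is only a sketch and assumes the third-party kafka-python package is installed (pip install kafka-python); it is not part of the project code.

# Sanity check that Flume events are arriving in the "log" topic
from kafka import KafkaConsumer  # provided by the kafka-python package

consumer = KafkaConsumer(
    "log",
    bootstrap_servers="zoo1:9092,zoo2:9092,zoo3:9092",
    group_id="smoke-test",
    auto_offset_reset="earliest",  # read from the beginning of the topic
)
for record in consumer:
    print(record.value.decode("utf-8"))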

7. Real-Time Stream Processing (Structured Streaming)

  • Scala version: 2.12.10
  • Spark version: 3.0.0

Structured Streaming real-time processing

package bin

import java.sql.{Connection,Timestamp}
import java.text.SimpleDateFormat

import common.DataBaseConnection
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}
import org.apache.spark.sql.functions._


object Computer {
  // Kafka cluster address and topic
  val bootstrapServer = "zoo1:9092,zoo2:9092,zoo3:9092"
  val topic = "log"
  // Timestamp formatter
  val ft = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
  // SQL used by scannerData to trim old rows from the result table
  var sql = "delete from app01_data where 1=1  limit 8;"
  val spark: SparkSession = SparkSession.builder().appName("log")
    .config(new SparkConf().setMaster("spark://zoo1:7077")).getOrCreate()

  import spark.implicits._

  // main method
  def main(args: Array[String]): Unit = run()

  // Program entry point
  private def run(): Unit = {
    val res = transData(spark)
    val windowCounts: DataFrame = setWindow(res)
    scannerData()
    startProcess(windowCounts)
  }

  // Start the Structured Streaming query and write each batch to MySQL
  private def startProcess(windowCounts: DataFrame): Unit = {
    val query = windowCounts
      .writeStream
      .outputMode("Complete")
      .foreachBatch((data: DataFrame, id: Long) => {
        data.groupBy("course").max("count")
          .withColumnRenamed("max(count)", "count")
          .write
          .format("jdbc")
          .option("url", "jdbc:mysql://zoo1:3306/streaming_computer?useSSL=false")
          .option("dbtable", "app01_data")
          .option("user", "root")
          .option("password", "1234")
          .mode(SaveMode.Append)
          .save()
      })
      .start()

    query.awaitTermination()
    query.stop()
  }
  // Define the event-time window: a 30-minute window sliding every 10 seconds, with a 60-minute watermark
  private def setWindow(res: DataFrame) = {
    val windowCounts: DataFrame = res.withWatermark("timestamp", "60 minutes")
      .groupBy(window($"timestamp", "30 minutes", "10 seconds"), $"course")
      .count()
      .drop("window")
    windowCounts
  }
  // Read from Kafka and clean the data, converting each line to (course, timestamp)
  def transData(spark: SparkSession): DataFrame = {
    val data = spark
      .readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", bootstrapServer)
      .option("subscribe", topic)
      .load()
    // Extract the fields we need: course keyword + event timestamp
    val res = data.selectExpr("CAST(value AS STRING)")
      .as[String]
      .filter(!_.contains("Na"))
      .map(line => (line.split(" ")(6).split("=")(1), new Timestamp(ft.parse(line.split(" ")(1) + " " + line.split(" ")(2)).getTime)))
      .toDF("course", "timestamp")
    res
  }
  // Periodically scan the result table and keep its size bounded
  def scannerData(): Unit = {
    new Thread(() => {
      val DBCon: Connection = new DataBaseConnection(
        "jdbc:mysql://zoo1:3306/streaming_computer?useSSL=false",
        "root",
        "1234"
      ).getConnection
      var len = 0
      while (true) {
        val pst = DBCon.prepareStatement(s"select count(*) from app01_data;")
        val res = pst.executeQuery()
        while (res.next()) {
          len = res.getInt(1)
        }
        if (len > 16) {
          DBCon.prepareStatement(sql).execute()
        }
        Thread.sleep(3000)
      }
    }).start()
  }
}

JDBC utility class

package common

import java.sql.{Connection, DriverManager}

class DataBaseConnection(url:String, user:String, password:String) {
  def getConnection: Connection ={
    Class.forName("com.mysql.jdbc.Driver")
    var con:Connection = DriverManager.getConnection(url, user, password)
    con
  }
}
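
To verify that the streaming job is actually writing results, the app01_data table can be inspected with a few lines of Python. This is a debugging sketch only, assuming the third-party pymysql package is installed; the host, credentials, database, and table name are the ones used in the Spark code above.

# Quick look at the result table written by the Structured Streaming job
import pymysql  # provided by the pymysql package

conn = pymysql.connect(host="zoo1", user="root", password="1234", database="streaming_computer")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT course, `count` FROM app01_data ORDER BY `count` DESC LIMIT 8")
        for course, count in cur.fetchall():
            print(course, count)
finally:
    conn.close()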

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>chaney02_BigData</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <scala.version>2.12.10</scala.version>
        <spark.version>3.0.0</spark.version>
        <encoding>UTF-8</encoding>
    </properties>
    <dependencies>
        <!-- Scala dependency -->
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <!-- kafka -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql-kafka-0-10_2.12</artifactId>
            <version>3.0.0</version>
        </dependency>
        <!-- spark -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.12</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.38</version>
        </dependency>
    </dependencies>
    <build>
        <!-- Directory containing the Scala sources to compile -->
        <sourceDirectory>src/main/scala</sourceDirectory>
        <!-- Scala plugin -->
        <plugins>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                        <configuration>
                            <args>
                                <!--<arg>-make:transitive</arg>--><!-- not supported for Scala 2.11 in NetBeans -->
                                <arg>-dependencyfile</arg>
                                <arg>${project.build.directory}/.scala_dependencies</arg>
                            </args>
                        </configuration>
                    </execution>
                </executions>
            </plugin>

            <!-- Maven shade (packaging) plugin -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                            <transformers>
                                <transformer
                                        implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                                    <resource>reference.conf</resource>
                                </transformer>
                                <transformer
                                        implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass>bin.Computer</mainClass> <!-- main class -->
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

8. Front-End Data Visualization (Django + ECharts)

  • Django version: 1.11.11
  • Python version: 3.7.0

index.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>ChaneyBigData</title>
    <link href="https://cdn.jsdelivr.net/npm/bootstrap@3.3.7/dist/css/bootstrap.min.css" rel="stylesheet">
    <script src="https://cdn.jsdelivr.net/npm/jquery@1.12.4/dist/jquery.min.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/bootstrap@3.3.7/dist/js/bootstrap.min.js"></script>
    <script src="https://cdn.bootcss.com/echarts/4.2.1-rc1/echarts.min.js"></script>

</head>
<body>
<nav class="navbar navbar-inverse text-center">
    <div class="page-header" style="color: white">
        <h1>Spark BigData Program: Real-Time Streaming Log Processing &nbsp;&nbsp;&nbsp;&nbsp;
            <small>
                <span class="glyphicon glyphicon-send center" aria-hidden="true">&nbsp;</span>
                <span class="glyphicon glyphicon-send center" aria-hidden="true">&nbsp;</span>
                <span class="glyphicon glyphicon-send center" aria-hidden="true">&nbsp;</span>
            </small>
        </h1>

    </div>
</nav>
<div class="container">
    <div class="row">
        <div class="col-md-10 col-md-offset-1">
            <div class="jumbotron">
                <h1>Real Time Course Selection</h1>
                <div id="main" style="width: 800px;height:300px;"></div>
                <script type="text/javascript">

                </script>
            </div>
        </div>
    </div>
    <hr>
    <div class="row">
        <div class="col-md-8 col-md-offset-2">
            <span><h4 class="text-center">
                <span class="glyphicon glyphicon-home" aria-hidden="true">&nbsp;</span>
                Chaney.BigData.com&nbsp;&nbsp;&nbsp;&nbsp;
                <span class="glyphicon glyphicon-envelope" aria-hidden="true">&nbsp;</span>
                Email: 133798276@yahoo.com
                </h4>
            </span>
        </div>
    </div>
</div>
<script>
    $(function () {
        flush()
    });
    function flush() {
        setTimeout(flush, 10000);
        $.ajax({
            url: "http://127.0.0.1:8000/",
            type: "post",
            data: 1,
            // two key settings: send the data as-is, without jQuery processing or a content-type header
            contentType: false,
            processData: false,
            success: function (data) {
                const myChart = echarts.init(document.getElementById('main'));
                const option = {
                    title: {},
                    tooltip: {},
                    legend: {},
                    xAxis: {
                        data: data.course
                    },
                    yAxis: {},
                    series: [{
                    type: 'bar',
                    data: [
                        {value: data.count[0],itemStyle: {color: '#00FFFF'}},
                        {value: data.count[1],itemStyle: {color: '#000000'}},
                        {value: data.count[2],itemStyle: {color: '#cff900'}},
                        {value: data.count[3],itemStyle: {color: '#cf0900'}},
                        {value: data.count[4],itemStyle: {color: '#d000f9'}},
                        {value: data.count[5],itemStyle: {color: '#FF7F50'}},
                        {value: data.count[6],itemStyle: {color: '#FF1493'}},
                        {value: data.count[7],itemStyle: {color: '#808080'}},

                        ]
                    }]
                };
                myChart.setOption(option);
            }
        });
    }

</script>
</body>
</html>

Django

models.py

from django.db import models


class Data(models.Model):
	course = models.CharField(verbose_name='课程名', max_length=255)
	count = models.BigIntegerField(verbose_name='选课人数')

views.py

from django.http import JsonResponse
from django.shortcuts import render
from app01 import models


def home(request):
	if request.method == 'POST':
		back_dic = {
			"course": [],
			"count": []
		}
		data = models.Data.objects.all()[:8]
		for res in data:
			back_dic["course"].append(res.course)
			back_dic["count"].append(res.count)
		return JsonResponse(back_dic)
	return render(request, "index.html", locals())
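
The listing above omits the URL configuration; a minimal urls.py for Django 1.11 that routes the site root to the home view polled by the Ajax code might look like the sketch below (an assumption, not the author's original file). Note that with Django's default middleware the Ajax POST also needs CSRF handling, for example a CSRF token in the request or a csrf_exempt decorator on the view.

# urls.py (sketch): map the site root to the home view
from django.conf.urls import url
from django.contrib import admin

from app01 import views

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^$', views.home),
]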

9. Project Showcase

The chart refreshes automatically every 10 seconds.

[Dashboard screenshot]

@Author: chaney

@Blog: https://blog.csdn.net/wangshu9939
