I want to accept multiple concurrent requests in a Flask API. The API currently takes a "company name" via POST and calls a crawler engine; each crawl takes 5-10 minutes to complete. I want to run multiple crawler engines in parallel, one per request. I followed this, but couldn't get it to work. Currently, the second request cancels the first one. How can I achieve this parallelism?
Current API implementation:
app.py

from flask import Flask, request, jsonify, abort
import subprocess

app = Flask(__name__)
app.debug = True

@app.route("/api/v1/crawl", methods=['POST'])
def crawl_end_point():
    if not request.is_json:
        abort(415)
    inputs = CompanyNameSchema(request)
    if not inputs.validate():
        return jsonify(success=False, errors=inputs.errors)
    data = request.get_json()
    company_name = data.get("company_name")
    print(company_name)
    if company_name is not None:
        search = SeedListGenerator(company_name)
        search.start_crawler()
        scrap = RunAllScrapper(company_name)
        scrap.start_all()
        subprocess.call(['/bin/bash', '-i', '-c', 'myconda;scrapy crawl company_profiler;'])
    return 'Data Pushed successfully to Solr Index!', 201

if __name__ == "__main__":
    app.run(host="10.250.36.52", use_reloader=True, threaded=True)

gunicorn.sh
#!/bin/bash
NAME="Crawler-API"
FLASKDIR=/root/Public/company_profiler
SOCKFILE=/root/Public/company_profiler/sock
LOG=./logs/gunicorn/gunicorn.log
PID=./gunicorn.pid
USER=root
GROUP=root
NUM_WORKERS=10 # generally in the 2-4 x $(NUM_CORES)+1 range
TIMEOUT=1200
#preload_apps = False
# The maximum number of requests a worker will process before restarting.
MAX_REQUESTS=0
echo "Starting $NAME"
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your gunicorn
exec gunicorn app:app -b 0.0.0.0:5000 \
--name $NAME \
--worker-class gevent \
--workers 5 \
--keep-alive 900 \
--graceful-timeout 1200 \
--worker-connections 5 \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level info \
--backlog 0 \
--pid=$PID \
--access-logformat='%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"' \
--error-logfile $LOG \
--log-file=-

Thanks in advance!
Posted on 2018-04-10 15:35:05
A better approach: use a job queue with Redis or similar. You create a queue of jobs, fetch their results, and organize the exchange with the frontend through API requests. Each job runs in a separate process without blocking the main application; otherwise you have to work around a bottleneck at every step.

A good implementation: python-rq, or its Flask integration.
http://python-rq.org/
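The core idea here, each job in its own process so the main app is never blocked, can be sketched with the standard library alone before bringing Redis in. All names below (`crawl`, `out`) are illustrative, not from the answer, and the explicit `fork` start method assumes a POSIX host:

```python
import multiprocessing as mp

# Assumption: POSIX host where the "fork" start method is available,
# so child processes inherit crawl() without pickling.
ctx = mp.get_context("fork")

def crawl(company_name, out):
    # stand-in for the real 5-10 minute crawler engine
    out.put(f"profile for {company_name}")

out = ctx.Queue()
# one process per request: both crawls run concurrently
procs = [ctx.Process(target=crawl, args=(name, out)) for name in ["acme", "globex"]]
for p in procs:
    p.start()
for p in procs:
    p.join()
results = {out.get() for _ in procs}
```

Unlike the original endpoint, neither crawl waits on or cancels the other; a queue like RQ adds persistence and worker management on top of this same pattern.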
import os
import redis
from rq import Worker, Queue, Connection

listen = ['high', 'default', 'low']
redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
conn = redis.from_url(redis_url)

if __name__ == '__main__':
    with Connection(conn):
        worker = Worker(map(Queue, listen))
        worker.work()

Then, on the application side:

from redis import Redis
from rq import Queue
from rq.job import Job

q = Queue(connection=Redis())

def crawl_end_point():
    ...

# add the task to the queue
result = q.enqueue(crawl_end_point, timeout=3600)
# simplest way: save the id of the job
session['j_id'] = result.get_id()
# get the job status
job = Job.fetch(session['j_id'], connection=conn)
job.get_status()
# get the job result
job.result

You can also check Celery for this purpose: https://stackshare.io/stackups/celery-vs-redis
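The enqueue-then-poll flow above can be sketched end to end as a Flask API. This is only an illustration of the request/poll pattern: the route names and the `jobs` dict are mine, and a thread pool stands in for the RQ worker so the sketch runs without a Redis server:

```python
import uuid
from concurrent.futures import ThreadPoolExecutor
from flask import Flask, jsonify, request

app = Flask(__name__)
executor = ThreadPoolExecutor(max_workers=4)
jobs = {}  # job_id -> Future (RQ would keep this state in Redis instead)

def crawl(company_name):
    # stand-in for the real 5-10 minute crawl
    return f"indexed {company_name}"

@app.route("/api/v1/crawl", methods=["POST"])
def enqueue_crawl():
    company_name = request.get_json()["company_name"]
    job_id = str(uuid.uuid4())
    jobs[job_id] = executor.submit(crawl, company_name)
    # 202 Accepted: the job is queued, not finished
    return jsonify(job_id=job_id), 202

@app.route("/api/v1/status/<job_id>")
def job_status(job_id):
    fut = jobs.get(job_id)
    if fut is None:
        return jsonify(error="unknown job"), 404
    if fut.done():
        return jsonify(status="finished", result=fut.result())
    return jsonify(status="started")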
https://stackoverflow.com/questions/49754413